Block merge sort, also known as WikiSort, is a fast, stable O(n log n) sorting algorithm that uses O(1) memory, designed by Mike McFadden.
The algorithm runs even faster when the input is already partially sorted, and it can be modified to use additional memory to increase its speed further.
As the name suggests, block merge sort works by breaking the given list of items into blocks, sorting them, and then merging them back together. To reach an asymptotic complexity of O(n log n), block merge sort combines at least two merge operations with an insertion sort. Its name reflects the observation that merging two sorted lists A and B is equivalent to breaking A into evenly sized pieces called blocks (Block...), inserting each A block into B according to special rules, and merging (...Merge...) the resulting AB pairs (...Sort). One practical block merge algorithm was published in 2008 by Pok-Son Kim and Arne Kutzner.
Description of the algorithm
The outer loop of block sort is identical to a bottom-up merge sort: each level of the sort merges pairs of subarrays, A and B, of size 1, then 2, then 4, 8, 16 and so on, until both subarrays combine into the array itself. Rather than merging A and B directly, the block merge algorithm divides A into discrete blocks of size √A (resulting in √A blocks), inserts each A block into B such that the first value of each A block is less than or equal to the B value immediately after it, and then locally merges each A block with the B values between it and the next A block. Because the merge still requires a buffer large enough to hold the A block being merged, two areas inside the array are reserved for this purpose (known as internal buffers).
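The structure of this outer loop can be illustrated with a minimal sketch, assuming a plain int array and hypothetical helper names; it uses an ordinary buffered merge where the real algorithm would apply the in-place block merge described above:

class BottomUpMergeSketch {
    // Outer loop of a bottom-up merge sort: merge pairs of subarrays of
    // width 1, 2, 4, 8, ... until the whole array is one sorted run.
    static void sort(int[] array) {
        int n = array.length;
        int[] buffer = new int[n];  // the block merge exists precisely to avoid this O(n) buffer
        for (int width = 1; width < n; width *= 2) {
            for (int start = 0; start < n; start += 2 * width) {
                int mid = Math.min(start + width, n);      // A = [start, mid)
                int end = Math.min(start + 2 * width, n);  // B = [mid, end)
                merge(array, start, mid, end, buffer);
            }
        }
    }

    // Standard stable merge of A = [start, mid) and B = [mid, end) through a buffer.
    static void merge(int[] a, int start, int mid, int end, int[] buf) {
        int i = start, j = mid, k = start;
        while (i < mid && j < end)
            buf[k++] = (a[j] < a[i]) ? a[j++] : a[i++];
        while (i < mid) buf[k++] = a[i++];
        while (j < end) buf[k++] = a[j++];
        System.arraycopy(buf, start, a, start, end - start);
    }
}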
To provide these internal buffers, the first two blocks are modified so that they contain the first instance of each value within A, with the original contents of those blocks swapped out as needed. The remaining A blocks are then inserted into B and merged using one of the two buffers as swap space; this process rearranges the values stored in that buffer. Once every A and B block of every A and B subarray at the current level of the merge sort has been merged, the values in the buffer must be sorted to restore their original order, which is done with an insertion sort. The buffer values are then redistributed back to their first sorted positions in the array. This process is repeated for every level of the outer bottom-up merge sort, at which point the array is stably sorted.
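The local merge of a single A block with the B values that follow it can be sketched as follows; a separate scratch buffer stands in here for the internal buffer that the real algorithm carves out of the array itself, and all names and half-open range bounds are illustrative assumptions:

class LocalBlockMergeSketch {
    // Merge the A block [aStart, aEnd) with the B values [aEnd, bEnd) in place,
    // using a scratch buffer at least as large as the A block
    // (analogous in spirit to MergeExternal in the listing below).
    static void mergeBlock(int[] array, int aStart, int aEnd, int bEnd, int[] scratch) {
        int aLen = aEnd - aStart;
        System.arraycopy(array, aStart, scratch, 0, aLen);  // move the A block out of the way
        int i = 0, j = aEnd, k = aStart;
        while (i < aLen && j < bEnd)
            array[k++] = (array[j] < scratch[i]) ? array[j++] : scratch[i++];
        while (i < aLen)
            array[k++] = scratch[i++];  // whatever remains of the A block goes at the end
        // any remaining B values are already in their final positions
    }
}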
Analysis
Block sort is a well-defined and readily testable algorithm, and a sufficient number of working implementations are available online, so its characteristics can be measured accurately. Block sort is adaptive on two levels: first, it skips merging the subarrays A and B altogether if they are already in order. Second, when A and B do have to be merged and are broken into evenly sized blocks, the A blocks are rolled through B only as far as necessary, and each block is merged only with the B values immediately following it. The more ordered the original data is, the fewer B values need to be merged into the A blocks.
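The first of these adaptivity checks costs a single comparison per pair of subarrays. A minimal sketch, assuming A = [start, mid) and B = [mid, end) over an int array, and mirroring the fast paths taken by the listing below (including its rotation shortcut for fully reversed pairs):

class AdaptiveMergeCheckSketch {
    // Decide how to handle A = [start, mid) and B = [mid, end) before merging.
    static void mergePair(int[] array, int start, int mid, int end) {
        if (array[mid - 1] <= array[mid])
            return;                          // already in order: skip the merge entirely
        if (array[end - 1] < array[start]) {
            rotate(array, start, mid, end);  // fully reversed: one rotation fixes it
            return;
        }
        // otherwise fall through to the actual (block) merge ...
    }

    // Rotate so that [mid, end) ends up before [start, mid), via three reversals.
    static void rotate(int[] a, int start, int mid, int end) {
        reverse(a, start, mid);
        reverse(a, mid, end);
        reverse(a, start, end);
    }

    static void reverse(int[] a, int from, int to) {
        for (int i = from, j = to - 1; i < j; i++, j--) {
            int t = a[i]; a[i] = a[j]; a[j] = t;
        }
    }
}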
Advantages
Block sort is a stable sorting algorithm that requires no additional memory, which is especially useful when there is not enough free memory to allocate an O(n) buffer. When the external-buffer variant of block sort is used, the buffer can be scaled down from the full O(n) size to progressively smaller buffers, and the algorithm keeps working efficiently.
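The Java listing below exposes exactly this tradeoff through its fixed cache_size field; its own comments name (size + 1)/2, sqrt((size + 1)/2) + 1, 512, and 0 as sensible choices. A hypothetical helper, provided only to illustrate that scaling, might look like this:

class CacheSizeSketch {
    // Pick the largest useful external cache that fits an assumed memory budget:
    //   (size + 1) / 2           -> behaves like a full-speed standard merge sort
    //   sqrt((size + 1) / 2) + 1 -> holds one A block at the largest merge level
    //   512 or less              -> small fixed buffer (512 is the listing's default)
    //   0                        -> fully in-place, no extra allocation at all
    static int chooseCacheSize(int size, int memoryBudget) {
        int full = (size + 1) / 2;
        int oneBlock = (int) Math.sqrt((size + 1) / 2) + 1;
        if (memoryBudget >= full) return full;
        if (memoryBudget >= oneBlock) return oneBlock;
        return Math.min(memoryBudget, 512);
    }
}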
Disadvantages
Block sort does not exploit sorted ranges of data as effectively as some other algorithms, such as Timsort, because it checks for them only at two levels: as the subarrays A and B, and as the A and B blocks. It is also harder to implement than an ordinary merge sort and cannot be parallelized as easily.
Code
/*********************************************************** WikiSort (public domain license) https://github.com/BonzaiThePenguin/WikiSort to run: javac WikiSort.java java WikiSort ***********************************************************/ // the performance of WikiSort here seems to be completely at the mercy of the JIT compiler // sometimes it's 40% as fast, sometimes 80%, and either way it's a lot slower than the C code import java.util.*; import java.lang.*; import java.io.*; // class to test stable sorting (index will contain its original index in the array, to make sure it doesn't switch places with other items) class Test { public int value; public int index; } class TestComparator implements Comparator{ static int comparisons = 0; public int compare(Test a, Test b) { comparisons++; if (a.value < b.value) return -1; if (a.value > b.value) return 1; return 0; } } // structure to represent ranges within the array class Range { public int start; public int end; public Range(int start1, int end1) { start = start1; end = end1; } public Range() { start = 0; end = 0; } void set(int start1, int end1) { start = start1; end = end1; } int length() { return end - start; } } class Pull { public int from, to, count; public Range range; public Pull() { range = new Range(0, 0); } void reset() { range.set(0, 0); from = 0; to = 0; count = 0; } } // calculate how to scale the index value to the range within the array // the bottom-up merge sort only operates on values that are powers of two, // so scale down to that power of two, then use a fraction to scale back again class Iterator { public int size, power_of_two; public int numerator, decimal; public int denominator, decimal_step, numerator_step; // 63 -> 32, 64 -> 64, etc. // this comes from Hacker's Delight static int FloorPowerOfTwo(int value) { int x = value; x = x | (x >> 1); x = x | (x >> 2); x = x | (x >> 4); x = x | (x >> 8); x = x | (x >> 16); return x - (x >> 1); } Iterator(int size2, int min_level) { size = size2; power_of_two = FloorPowerOfTwo(size); denominator = power_of_two/min_level; numerator_step = size % denominator; decimal_step = size/denominator; begin(); } void begin() { numerator = decimal = 0; } Range nextRange() { int start = decimal; decimal += decimal_step; numerator += numerator_step; if (numerator >= denominator) { numerator -= denominator; decimal++; } return new Range(start, decimal); } boolean finished() { return (decimal >= size); } boolean nextLevel() { decimal_step += decimal_step; numerator_step += numerator_step; if (numerator_step >= denominator) { numerator_step -= denominator; decimal_step++; } return (decimal_step < size); } int length() { return decimal_step; } } class WikiSorter { // use a small cache to speed up some of the operations // since the cache size is fixed, it's still O(1) memory! // just keep in mind that making it too small ruins the point (nothing will fit into it), // and making it too large also ruins the point (so much for "low memory"!) 
private static int cache_size = 512; private T[] cache; // note that you can easily modify the above to allocate a dynamically sized cache // good choices for the cache size are: // (size + 1)/2 – turns into a full-speed standard merge sort since everything fits into the cache // sqrt((size + 1)/2) + 1 – this will be the size of the A blocks at the largest level of merges, // so a buffer of this size would allow it to skip using internal or in-place merges for anything // 512 – chosen from careful testing as a good balance between fixed-size memory use and run time // 0 – if the system simply cannot allocate any extra memory whatsoever, no memory works just fine public WikiSorter() { @SuppressWarnings("unchecked") T[] cache1 = (T[])new Object[cache_size]; if (cache1 == null) cache_size = 0; else cache = cache1; } public static void sort(T[] array, Comparator comp) { new WikiSorter ().Sort(array, comp); } // toolbox functions used by the sorter // find the index of the first value within the range that is equal to array[index] int BinaryFirst(T array[], T value, Range range, Comparator comp) { int start = range.start, end = range.end - 1; while (start < end) { int mid = start + (end - start)/2; if (comp.compare(array[mid], value) < 0) start = mid + 1; else end = mid; } if (start == range.end - 1 && comp.compare(array[start], value) < 0) start++; return start; } // find the index of the last value within the range that is equal to array[index], plus 1 int BinaryLast(T array[], T value, Range range, Comparator comp) { int start = range.start, end = range.end - 1; while (start < end) { int mid = start + (end - start)/2; if (comp.compare(value, array[mid]) >= 0) start = mid + 1; else end = mid; } if (start == range.end - 1 && comp.compare(value, array[start]) >= 0) start++; return start; } // combine a linear search with a binary search to reduce the number of comparisons in situations // where have some idea as to how many unique values there are and where the next value might be int FindFirstForward(T array[], T value, Range range, Comparator comp, int unique) { if (range.length() == 0) return range.start; int index, skip = Math.max(range.length()/unique, 1); for (index = range.start + skip; comp.compare(array[index - 1], value) < 0; index += skip) if (index >= range.end - skip) return BinaryFirst(array, value, new Range(index, range.end), comp); return BinaryFirst(array, value, new Range(index - skip, index), comp); } int FindLastForward(T array[], T value, Range range, Comparator comp, int unique) { if (range.length() == 0) return range.start; int index, skip = Math.max(range.length()/unique, 1); for (index = range.start + skip; comp.compare(value, array[index - 1]) >= 0; index += skip) if (index >= range.end - skip) return BinaryLast(array, value, new Range(index, range.end), comp); return BinaryLast(array, value, new Range(index - skip, index), comp); } int FindFirstBackward(T array[], T value, Range range, Comparator comp, int unique) { if (range.length() == 0) return range.start; int index, skip = Math.max(range.length()/unique, 1); for (index = range.end - skip; index > range.start && comp.compare(array[index - 1], value) >= 0; index -= skip) if (index < range.start + skip) return BinaryFirst(array, value, new Range(range.start, index), comp); return BinaryFirst(array, value, new Range(index, index + skip), comp); } int FindLastBackward(T array[], T value, Range range, Comparator comp, int unique) { if (range.length() == 0) return range.start; int index, skip = 
Math.max(range.length()/unique, 1); for (index = range.end - skip; index > range.start && comp.compare(value, array[index - 1]) < 0; index -= skip) if (index < range.start + skip) return BinaryLast(array, value, new Range(range.start, index), comp); return BinaryLast(array, value, new Range(index, index + skip), comp); } // n^2 sorting algorithm used to sort tiny chunks of the full array void InsertionSort(T array[], Range range, Comparator comp) { for (int j, i = range.start + 1; i < range.end; i++) { T temp = array[i]; for (j = i; j > range.start && comp.compare(temp, array[j - 1]) < 0; j--) array[j] = array[j - 1]; array[j] = temp; } } // reverse a range of values within the array void Reverse(T array[], Range range) { for (int index = range.length()/2 - 1; index >= 0; index--) { T swap = array[range.start + index]; array[range.start + index] = array[range.end - index - 1]; array[range.end - index - 1] = swap; } } // swap a series of values in the array void BlockSwap(T array[], int start1, int start2, int block_size) { for (int index = 0; index < block_size; index++) { T swap = array[start1 + index]; array[start1 + index] = array[start2 + index]; array[start2 + index] = swap; } } // rotate the values in an array ([0 1 2 3] becomes [1 2 3 0] if we rotate by 1) // this assumes that 0 <= amount <= range.length() void Rotate(T array[], int amount, Range range, boolean use_cache) { if (range.length() == 0) return; int split; if (amount >= 0) split = range.start + amount; else split = range.end + amount; Range range1 = new Range(range.start, split); Range range2 = new Range(split, range.end); if (use_cache) { // if the smaller of the two ranges fits into the cache, it's *slightly* faster copying it there and shifting the elements over if (range1.length() <= range2.length()) { if (range1.length() <= cache_size) { if (cache != null) { java.lang.System.arraycopy(array, range1.start, cache, 0, range1.length()); java.lang.System.arraycopy(array, range2.start, array, range1.start, range2.length()); java.lang.System.arraycopy(cache, 0, array, range1.start + range2.length(), range1.length()); } return; } } else { if (range2.length() <= cache_size) { if (cache != null) { java.lang.System.arraycopy(array, range2.start, cache, 0, range2.length()); java.lang.System.arraycopy(array, range1.start, array, range2.end - range1.length(), range1.length()); java.lang.System.arraycopy(cache, 0, array, range1.start, range2.length()); } return; } } } Reverse(array, range1); Reverse(array, range2); Reverse(array, range); } // merge two ranges from one array and save the results into a different array void MergeInto(T from[], Range A, Range B, Comparator comp, T into[], int at_index) { int A_index = A.start; int B_index = B.start; int insert_index = at_index; int A_last = A.end; int B_last = B.end; while (true) { if (comp.compare(from[B_index], from[A_index]) >= 0) { into[insert_index] = from[A_index]; A_index++; insert_index++; if (A_index == A_last) { // copy the remainder of B into the final array java.lang.System.arraycopy(from, B_index, into, insert_index, B_last - B_index); break; } } else { into[insert_index] = from[B_index]; B_index++; insert_index++; if (B_index == B_last) { // copy the remainder of A into the final array java.lang.System.arraycopy(from, A_index, into, insert_index, A_last - A_index); break; } } } } // merge operation using an external buffer, void MergeExternal(T array[], Range A, Range B, Comparator comp) { // A fits into the cache, so use that instead of the internal buffer int A_index = 
0; int B_index = B.start; int insert_index = A.start; int A_last = A.length(); int B_last = B.end; if (B.length() > 0 && A.length() > 0) { while (true) { if (comp.compare(array[B_index], cache[A_index]) >= 0) { array[insert_index] = cache[A_index]; A_index++; insert_index++; if (A_index == A_last) break; } else { array[insert_index] = array[B_index]; B_index++; insert_index++; if (B_index == B_last) break; } } } // copy the remainder of A into the final array if (cache != null) java.lang.System.arraycopy(cache, A_index, array, insert_index, A_last - A_index); } // merge operation using an internal buffer void MergeInternal(T array[], Range A, Range B, Comparator comp, Range buffer) { // whenever we find a value to add to the final array, swap it with the value that's already in that spot // when this algorithm is finished, 'buffer' will contain its original contents, but in a different order int A_count = 0, B_count = 0, insert = 0; if (B.length() > 0 && A.length() > 0) { while (true) { if (comp.compare(array[B.start + B_count], array[buffer.start + A_count]) >= 0) { T swap = array[A.start + insert]; array[A.start + insert] = array[buffer.start + A_count]; array[buffer.start + A_count] = swap; A_count++; insert++; if (A_count >= A.length()) break; } else { T swap = array[A.start + insert]; array[A.start + insert] = array[B.start + B_count]; array[B.start + B_count] = swap; B_count++; insert++; if (B_count >= B.length()) break; } } } // swap the remainder of A into the final array BlockSwap(array, buffer.start + A_count, A.start + insert, A.length() - A_count); } // merge operation without a buffer void MergeInPlace(T array[], Range A, Range B, Comparator comp) { if (A.length() == 0 || B.length() == 0) return; /* this just repeatedly binary searches into B and rotates A into position. the paper suggests using the 'rotation-based Hwang and Lin algorithm' here, but I decided to stick with this because it had better situational performance (Hwang and Lin is designed for merging subarrays of very different sizes, but WikiSort almost always uses subarrays that are roughly the same size) normally this is incredibly suboptimal, but this function is only called when none of the A or B blocks in any subarray contained 2√A unique values, which places a hard limit on the number of times this will ACTUALLY need to binary search and rotate. according to my analysis the worst case is √A rotations performed on √A items once the constant factors are removed, which ends up being O(n) again, this is NOT a general-purpose solution – it only works well in this case! 
kind of like how the O(n^2) insertion sort is used in some places */ A = new Range(A.start, A.end); B = new Range(B.start, B.end); while (true) { // find the first place in B where the first item in A needs to be inserted int mid = BinaryFirst(array, array[A.start], B, comp); // rotate A into place int amount = mid - A.end; Rotate(array, -amount, new Range(A.start, mid), true); if (B.end == mid) break; // calculate the new A and B ranges B.start = mid; A.set(A.start + amount, B.start); A.start = BinaryLast(array, array[A.start], A, comp); if (A.length() == 0) break; } } void NetSwap(T array[], int order[], Range range, Comparator comp, int x, int y) { int compare = comp.compare(array[range.start + x], array[range.start + y]); if (compare > 0 || (order[x] > order[y] && compare == 0)) { T swap = array[range.start + x]; array[range.start + x] = array[range.start + y]; array[range.start + y] = swap; int swap2 = order[x]; order[x] = order[y]; order[y] = swap2; } } // bottom-up merge sort combined with an in-place merge algorithm for O(1) memory use void Sort(T array[], Comparator comp) { int size = array.length; // if the array is of size 0, 1, 2, or 3, just sort them like so: if (size < 4) { if (size == 3) { // hard-coded insertion sort if (comp.compare(array[1], array[0]) < 0) { T swap = array[0]; array[0] = array[1]; array[1] = swap; } if (comp.compare(array[2], array[1]) < 0) { T swap = array[1]; array[1] = array[2]; array[2] = swap; if (comp.compare(array[1], array[0]) < 0) { swap = array[0]; array[0] = array[1]; array[1] = swap; } } } else if (size == 2) { // swap the items if they're out of order if (comp.compare(array[1], array[0]) < 0) { T swap = array[0]; array[0] = array[1]; array[1] = swap; } } return; } // sort groups of 4-8 items at a time using an unstable sorting network, // but keep track of the original item orders to force it to be stable // http://pages.ripco.net/~jgamble/nw.html Iterator iterator = new Iterator(size, 4); while (!iterator.finished()) { int order[] = { 0, 1, 2, 3, 4, 5, 6, 7 }; Range range = iterator.nextRange(); if (range.length() == 8) { NetSwap(array, order, range, comp, 0, 1); NetSwap(array, order, range, comp, 2, 3); NetSwap(array, order, range, comp, 4, 5); NetSwap(array, order, range, comp, 6, 7); NetSwap(array, order, range, comp, 0, 2); NetSwap(array, order, range, comp, 1, 3); NetSwap(array, order, range, comp, 4, 6); NetSwap(array, order, range, comp, 5, 7); NetSwap(array, order, range, comp, 1, 2); NetSwap(array, order, range, comp, 5, 6); NetSwap(array, order, range, comp, 0, 4); NetSwap(array, order, range, comp, 3, 7); NetSwap(array, order, range, comp, 1, 5); NetSwap(array, order, range, comp, 2, 6); NetSwap(array, order, range, comp, 1, 4); NetSwap(array, order, range, comp, 3, 6); NetSwap(array, order, range, comp, 2, 4); NetSwap(array, order, range, comp, 3, 5); NetSwap(array, order, range, comp, 3, 4); } else if (range.length() == 7) { NetSwap(array, order, range, comp, 1, 2); NetSwap(array, order, range, comp, 3, 4); NetSwap(array, order, range, comp, 5, 6); NetSwap(array, order, range, comp, 0, 2); NetSwap(array, order, range, comp, 3, 5); NetSwap(array, order, range, comp, 4, 6); NetSwap(array, order, range, comp, 0, 1); NetSwap(array, order, range, comp, 4, 5); NetSwap(array, order, range, comp, 2, 6); NetSwap(array, order, range, comp, 0, 4); NetSwap(array, order, range, comp, 1, 5); NetSwap(array, order, range, comp, 0, 3); NetSwap(array, order, range, comp, 2, 5); NetSwap(array, order, range, comp, 1, 3); NetSwap(array, order, 
range, comp, 2, 4); NetSwap(array, order, range, comp, 2, 3); } else if (range.length() == 6) { NetSwap(array, order, range, comp, 1, 2); NetSwap(array, order, range, comp, 4, 5); NetSwap(array, order, range, comp, 0, 2); NetSwap(array, order, range, comp, 3, 5); NetSwap(array, order, range, comp, 0, 1); NetSwap(array, order, range, comp, 3, 4); NetSwap(array, order, range, comp, 2, 5); NetSwap(array, order, range, comp, 0, 3); NetSwap(array, order, range, comp, 1, 4); NetSwap(array, order, range, comp, 2, 4); NetSwap(array, order, range, comp, 1, 3); NetSwap(array, order, range, comp, 2, 3); } else if (range.length() == 5) { NetSwap(array, order, range, comp, 0, 1); NetSwap(array, order, range, comp, 3, 4); NetSwap(array, order, range, comp, 2, 4); NetSwap(array, order, range, comp, 2, 3); NetSwap(array, order, range, comp, 1, 4); NetSwap(array, order, range, comp, 0, 3); NetSwap(array, order, range, comp, 0, 2); NetSwap(array, order, range, comp, 1, 3); NetSwap(array, order, range, comp, 1, 2); } else if (range.length() == 4) { NetSwap(array, order, range, comp, 0, 1); NetSwap(array, order, range, comp, 2, 3); NetSwap(array, order, range, comp, 0, 2); NetSwap(array, order, range, comp, 1, 3); NetSwap(array, order, range, comp, 1, 2); } } if (size < 8) return; // we need to keep track of a lot of ranges during this sort! Range buffer1 = new Range(), buffer2 = new Range(); Range blockA = new Range(), blockB = new Range(); Range lastA = new Range(), lastB = new Range(); Range firstA = new Range(); Range A = new Range(), B = new Range(); Pull[] pull = new Pull[2]; pull[0] = new Pull(); pull[1] = new Pull(); // then merge sort the higher levels, which can be 8-15, 16-31, 32-63, 64-127, etc. while (true) { // if every A and B block will fit into the cache, use a special branch specifically for merging with the cache // (we use < rather than <= since the block size might be one more than iterator.length()) if (iterator.length() < cache_size) { // if four subarrays fit into the cache, it's faster to merge both pairs of subarrays into the cache, // then merge the two merged subarrays from the cache back into the original array if ((iterator.length() + 1) * 4 <= cache_size && iterator.length() * 4 <= size) { iterator.begin(); while (!iterator.finished()) { // merge A1 and B1 into the cache Range A1 = iterator.nextRange(); Range B1 = iterator.nextRange(); Range A2 = iterator.nextRange(); Range B2 = iterator.nextRange(); if (comp.compare(array[B1.end - 1], array[A1.start]) < 0) { // the two ranges are in reverse order, so copy them in reverse order into the cache java.lang.System.arraycopy(array, A1.start, cache, B1.length(), A1.length()); java.lang.System.arraycopy(array, B1.start, cache, 0, B1.length()); } else if (comp.compare(array[B1.start], array[A1.end - 1]) < 0) { // these two ranges weren't already in order, so merge them into the cache MergeInto(array, A1, B1, comp, cache, 0); } else { // if A1, B1, A2, and B2 are all in order, skip doing anything else if (comp.compare(array[B2.start], array[A2.end - 1]) >= 0 && comp.compare(array[A2.start], array[B1.end - 1]) >= 0) continue; // copy A1 and B1 into the cache in the same order java.lang.System.arraycopy(array, A1.start, cache, 0, A1.length()); java.lang.System.arraycopy(array, B1.start, cache, A1.length(), B1.length()); } A1.set(A1.start, B1.end); // merge A2 and B2 into the cache if (comp.compare(array[B2.end - 1], array[A2.start]) < 0) { // the two ranges are in reverse order, so copy them in reverse order into the cache 
java.lang.System.arraycopy(array, A2.start, cache, A1.length() + B2.length(), A2.length()); java.lang.System.arraycopy(array, B2.start, cache, A1.length(), B2.length()); } else if (comp.compare(array[B2.start], array[A2.end - 1]) < 0) { // these two ranges weren't already in order, so merge them into the cache MergeInto(array, A2, B2, comp, cache, A1.length()); } else { // copy A2 and B2 into the cache in the same order java.lang.System.arraycopy(array, A2.start, cache, A1.length(), A2.length()); java.lang.System.arraycopy(array, B2.start, cache, A1.length() + A2.length(), B2.length()); } A2.set(A2.start, B2.end); // merge A1 and A2 from the cache into the array Range A3 = new Range(0, A1.length()); Range B3 = new Range(A1.length(), A1.length() + A2.length()); if (comp.compare(cache[B3.end - 1], cache[A3.start]) < 0) { // the two ranges are in reverse order, so copy them in reverse order into the cache java.lang.System.arraycopy(cache, A3.start, array, A1.start + A2.length(), A3.length()); java.lang.System.arraycopy(cache, B3.start, array, A1.start, B3.length()); } else if (comp.compare(cache[B3.start], cache[A3.end - 1]) < 0) { // these two ranges weren't already in order, so merge them back into the array MergeInto(cache, A3, B3, comp, array, A1.start); } else { // copy A3 and B3 into the array in the same order java.lang.System.arraycopy(cache, A3.start, array, A1.start, A3.length()); java.lang.System.arraycopy(cache, B3.start, array, A1.start + A1.length(), B3.length()); } } // we merged two levels at the same time, so we're done with this level already // (iterator.nextLevel() is called again at the bottom of this outer merge loop) iterator.nextLevel(); } else { iterator.begin(); while (!iterator.finished()) { A = iterator.nextRange(); B = iterator.nextRange(); if (comp.compare(array[B.end - 1], array[A.start]) < 0) { // the two ranges are in reverse order, so a simple rotation should fix it Rotate(array, A.length(), new Range(A.start, B.end), true); } else if (comp.compare(array[B.start], array[A.end - 1]) < 0) { // these two ranges weren't already in order, so we'll need to merge them! java.lang.System.arraycopy(array, A.start, cache, 0, A.length()); MergeExternal(array, A, B, comp); } } } } else { // this is where the in-place merge logic starts! // 1. pull out two internal buffers each containing √A unique values // 1a. adjust block_size and buffer_size if we couldn't find enough unique values // 2. loop over the A and B subarrays within this level of the merge sort // 3. break A and B into blocks of size 'block_size' // 4. "tag" each of the A blocks with values from the first internal buffer // 5. roll the A blocks through the B blocks and drop/rotate them where they belong // 6. merge each A block with any B values that follow, using the cache or the second internal buffer // 7. sort the second internal buffer if it exists // 8. 
redistribute the two internal buffers back into the array int block_size = (int)Math.sqrt(iterator.length()); int buffer_size = iterator.length()/block_size + 1; // as an optimization, we really only need to pull out the internal buffers once for each level of merges // after that we can reuse the same buffers over and over, then redistribute it when we're finished with this level int index, last, count, pull_index = 0; buffer1.set(0, 0); buffer2.set(0, 0); pull[0].reset(); pull[1].reset(); // find two internal buffers of size 'buffer_size' each int find = buffer_size + buffer_size; boolean find_separately = false; if (block_size <= cache_size) { // if every A block fits into the cache then we won't need the second internal buffer, // so we really only need to find 'buffer_size' unique values find = buffer_size; } else if (find > iterator.length()) { // we can't fit both buffers into the same A or B subarray, so find two buffers separately find = buffer_size; find_separately = true; } // we need to find either a single contiguous space containing 2√A unique values (which will be split up into two buffers of size √A each), // or we need to find one buffer of < 2√A unique values, and a second buffer of √A unique values, // OR if we couldn't find that many unique values, we need the largest possible buffer we can get // in the case where it couldn't find a single buffer of at least √A unique values, // all of the Merge steps must be replaced by a different merge algorithm (MergeInPlace) iterator.begin(); while (!iterator.finished()) { A = iterator.nextRange(); B = iterator.nextRange(); // check A for the number of unique values we need to fill an internal buffer // these values will be pulled out to the start of A for (last = A.start, count = 1; count < find; last = index, count++) { index = FindLastForward(array, array[last], new Range(last + 1, A.end), comp, find - count); if (index == A.end) break; } index = last; if (count >= buffer_size) { // keep track of the range within the array where we'll need to "pull out" these values to create the internal buffer pull[pull_index].range.set(A.start, B.end); pull[pull_index].count = count; pull[pull_index].from = index; pull[pull_index].to = A.start; pull_index = 1; if (count == buffer_size + buffer_size) { // we were able to find a single contiguous section containing 2√A unique values, // so this section can be used to contain both of the internal buffers we'll need buffer1.set(A.start, A.start + buffer_size); buffer2.set(A.start + buffer_size, A.start + count); break; } else if (find == buffer_size + buffer_size) { // we found a buffer that contains at least √A unique values, but did not contain the full 2√A unique values, // so we still need to find a second separate buffer of at least √A unique values buffer1.set(A.start, A.start + count); find = buffer_size; } else if (block_size <= cache_size) { // we found the first and only internal buffer that we need, so we're done! buffer1.set(A.start, A.start + count); break; } else if (find_separately) { // found one buffer, but now find the other one buffer1 = new Range(A.start, A.start + count); find_separately = false; } else { // we found a second buffer in an 'A' subarray containing √A unique values, so we're done! 
buffer2.set(A.start, A.start + count); break; } } else if (pull_index == 0 && count > buffer1.length()) { // keep track of the largest buffer we were able to find buffer1.set(A.start, A.start + count); pull[pull_index].range.set(A.start, B.end); pull[pull_index].count = count; pull[pull_index].from = index; pull[pull_index].to = A.start; } // check B for the number of unique values we need to fill an internal buffer // these values will be pulled out to the end of B for (last = B.end - 1, count = 1; count < find; last = index - 1, count++) { index = FindFirstBackward(array, array[last], new Range(B.start, last), comp, find - count); if (index == B.start) break; } index = last; if (count >= buffer_size) { // keep track of the range within the array where we'll need to "pull out" these values to create the internal buffer pull[pull_index].range.set(A.start, B.end); pull[pull_index].count = count; pull[pull_index].from = index; pull[pull_index].to = B.end; pull_index = 1; if (count == buffer_size + buffer_size) { // we were able to find a single contiguous section containing 2√A unique values, // so this section can be used to contain both of the internal buffers we'll need buffer1.set(B.end - count, B.end - buffer_size); buffer2.set(B.end - buffer_size, B.end); break; } else if (find == buffer_size + buffer_size) { // we found a buffer that contains at least √A unique values, but did not contain the full 2√A unique values, // so we still need to find a second separate buffer of at least √A unique values buffer1.set(B.end - count, B.end); find = buffer_size; } else if (block_size <= cache_size) { // we found the first and only internal buffer that we need, so we're done! buffer1.set(B.end - count, B.end); break; } else if (find_separately) { // found one buffer, but now find the other one buffer1 = new Range(B.end - count, B.end); find_separately = false; } else { // buffer2 will be pulled out from a 'B' subarray, so if the first buffer was pulled out from the corresponding 'A' subarray, // we need to adjust the end point for that A subarray so it knows to stop redistributing its values before reaching buffer2 if (pull[0].range.start == A.start) pull[0].range.end -= pull[1].count; // we found a second buffer in an 'B' subarray containing √A unique values, so we're done! 
buffer2.set(B.end - count, B.end); break; } } else if (pull_index == 0 && count > buffer1.length()) { // keep track of the largest buffer we were able to find buffer1.set(B.end - count, B.end); pull[pull_index].range.set(A.start, B.end); pull[pull_index].count = count; pull[pull_index].from = index; pull[pull_index].to = B.end; } } // pull out the two ranges so we can use them as internal buffers for (pull_index = 0; pull_index < 2; pull_index++) { int length = pull[pull_index].count; if (pull[pull_index].to < pull[pull_index].from) { // we're pulling the values out to the left, which means the start of an A subarray index = pull[pull_index].from; for (count = 1; count < length; count++) { index = FindFirstBackward(array, array[index - 1], new Range(pull[pull_index].to, pull[pull_index].from - (count - 1)), comp, length - count); Range range = new Range(index + 1, pull[pull_index].from + 1); Rotate(array, range.length() - count, range, true); pull[pull_index].from = index + count; } } else if (pull[pull_index].to > pull[pull_index].from) { // we're pulling values out to the right, which means the end of a B subarray index = pull[pull_index].from + 1; for (count = 1; count < length; count++) { index = FindLastForward(array, array[index], new Range(index, pull[pull_index].to), comp, length - count); Range range = new Range(pull[pull_index].from, index - 1); Rotate(array, count, range, true); pull[pull_index].from = index - 1 - count; } } } // adjust block_size and buffer_size based on the values we were able to pull out buffer_size = buffer1.length(); block_size = iterator.length()/buffer_size + 1; // the first buffer NEEDS to be large enough to tag each of the evenly sized A blocks, // so this was originally here to test the math for adjusting block_size above //if ((iterator.length() + 1)/block_size > buffer_size) throw new RuntimeException(); // now that the two internal buffers have been created, it's time to merge each A+B combination at this level of the merge sort! iterator.begin(); while (!iterator.finished()) { A = iterator.nextRange(); B = iterator.nextRange(); // remove any parts of A or B that are being used by the internal buffers int start = A.start; if (start == pull[0].range.start) { if (pull[0].from > pull[0].to) { A.start += pull[0].count; // if the internal buffer takes up the entire A or B subarray, then there's nothing to merge // this only happens for very small subarrays, like √4 = 2, 2 * (2 internal buffers) = 4, // which also only happens when cache_size is small or 0 since it'd otherwise use MergeExternal if (A.length() == 0) continue; } else if (pull[0].from < pull[0].to) { B.end -= pull[0].count; if (B.length() == 0) continue; } } if (start == pull[1].range.start) { if (pull[1].from > pull[1].to) { A.start += pull[1].count; if (A.length() == 0) continue; } else if (pull[1].from < pull[1].to) { B.end -= pull[1].count; if (B.length() == 0) continue; } } if (comp.compare(array[B.end - 1], array[A.start]) < 0) { // the two ranges are in reverse order, so a simple rotation should fix it Rotate(array, A.length(), new Range(A.start, B.end), true); } else if (comp.compare(array[A.end], array[A.end - 1]) < 0) { // these two ranges weren't already in order, so we'll need to merge them! // break the remainder of A into blocks. 
firstA is the uneven-sized first A block blockA.set(A.start, A.end); firstA.set(A.start, A.start + blockA.length() % block_size); // swap the first value of each A block with the value in buffer1 int indexA = buffer1.start; for (index = firstA.end; index < blockA.end; index += block_size) { T swap = array[indexA]; array[indexA] = array[index]; array[index] = swap; indexA++; } // start rolling the A blocks through the B blocks! // whenever we leave an A block behind, we'll need to merge the previous A block with any B blocks that follow it, so track that information as well lastA.set(firstA.start, firstA.end); lastB.set(0, 0); blockB.set(B.start, B.start + Math.min(block_size, B.length())); blockA.start += firstA.length(); indexA = buffer1.start; // if the first unevenly sized A block fits into the cache, copy it there for when we go to Merge it // otherwise, if the second buffer is available, block swap the contents into that if (lastA.length() <= cache_size && cache != null) java.lang.System.arraycopy(array, lastA.start, cache, 0, lastA.length()); else if (buffer2.length() > 0) BlockSwap(array, lastA.start, buffer2.start, lastA.length()); if (blockA.length() > 0) { while (true) { // if there's a previous B block and the first value of the minimum A block is <= the last value of the previous B block, // then drop that minimum A block behind. or if there are no B blocks left then keep dropping the remaining A blocks. if ((lastB.length() > 0 && comp.compare(array[lastB.end - 1], array[indexA]) >= 0) || blockB.length() == 0) { // figure out where to split the previous B block, and rotate it at the split int B_split = BinaryFirst(array, array[indexA], lastB, comp); int B_remaining = lastB.end - B_split; // swap the minimum A block to the beginning of the rolling A blocks int minA = blockA.start; for (int findA = minA + block_size; findA < blockA.end; findA += block_size) if (comp.compare(array[findA], array[minA]) < 0) minA = findA; BlockSwap(array, blockA.start, minA, block_size); // swap the first item of the previous A block back with its original value, which is stored in buffer1 T swap = array[blockA.start]; array[blockA.start] = array[indexA]; array[indexA] = swap; indexA++; // locally merge the previous A block with the B values that follow it // if lastA fits into the external cache we'll use that (with MergeExternal), // or if the second internal buffer exists we'll use that (with MergeInternal), // or failing that we'll use a strictly in-place merge algorithm (MergeInPlace) if (lastA.length() <= cache_size) MergeExternal(array, lastA, new Range(lastA.end, B_split), comp); else if (buffer2.length() > 0) MergeInternal(array, lastA, new Range(lastA.end, B_split), comp, buffer2); else MergeInPlace(array, lastA, new Range(lastA.end, B_split), comp); if (buffer2.length() > 0 || block_size <= cache_size) { // copy the previous A block into the cache or buffer2, since that's where we need it to be when we go to merge it anyway if (block_size <= cache_size) java.lang.System.arraycopy(array, blockA.start, cache, 0, block_size); else BlockSwap(array, blockA.start, buffer2.start, block_size); // this is equivalent to rotating, but faster // the area normally taken up by the A block is either the contents of buffer2, or data we don't need anymore since we memcopied it // either way, we don't need to retain the order of those items, so instead of rotating we can just block swap B to where it belongs BlockSwap(array, B_split, blockA.start + block_size - B_remaining, B_remaining); } else { // we are 
unable to use the 'buffer2' trick to speed up the rotation operation since buffer2 doesn't exist, so perform a normal rotation Rotate(array, blockA.start - B_split, new Range(B_split, blockA.start + block_size), true); } // update the range for the remaining A blocks, and the range remaining from the B block after it was split lastA.set(blockA.start - B_remaining, blockA.start - B_remaining + block_size); lastB.set(lastA.end, lastA.end + B_remaining); // if there are no more A blocks remaining, this step is finished! blockA.start += block_size; if (blockA.length() == 0) break; } else if (blockB.length() < block_size) { // move the last B block, which is unevenly sized, to before the remaining A blocks, by using a rotation // the cache is disabled here since it might contain the contents of the previous A block Rotate(array, -blockB.length(), new Range(blockA.start, blockB.end), false); lastB.set(blockA.start, blockA.start + blockB.length()); blockA.start += blockB.length(); blockA.end += blockB.length(); blockB.end = blockB.start; } else { // roll the leftmost A block to the end by swapping it with the next B block BlockSwap(array, blockA.start, blockB.start, block_size); lastB.set(blockA.start, blockA.start + block_size); blockA.start += block_size; blockA.end += block_size; blockB.start += block_size; blockB.end += block_size; if (blockB.end > B.end) blockB.end = B.end; } } } // merge the last A block with the remaining B values if (lastA.length() <= cache_size) MergeExternal(array, lastA, new Range(lastA.end, B.end), comp); else if (buffer2.length() > 0) MergeInternal(array, lastA, new Range(lastA.end, B.end), comp, buffer2); else MergeInPlace(array, lastA, new Range(lastA.end, B.end), comp); } } // when we're finished with this merge step we should have the one or two internal buffers left over, where the second buffer is all jumbled up // insertion sort the second buffer, then redistribute the buffers back into the array using the opposite process used for creating the buffer // while an unstable sort like quick sort could be applied here, in benchmarks it was consistently slightly slower than a simple insertion sort, // even for tens of millions of items. 
this may be because insertion sort is quite fast when the data is already somewhat sorted, like it is here InsertionSort(array, buffer2, comp); for (pull_index = 0; pull_index < 2; pull_index++) { int unique = pull[pull_index].count * 2; if (pull[pull_index].from > pull[pull_index].to) { // the values were pulled out to the left, so redistribute them back to the right Range buffer = new Range(pull[pull_index].range.start, pull[pull_index].range.start + pull[pull_index].count); while (buffer.length() > 0) { index = FindFirstForward(array, array[buffer.start], new Range(buffer.end, pull[pull_index].range.end), comp, unique); int amount = index - buffer.end; Rotate(array, buffer.length(), new Range(buffer.start, index), true); buffer.start += (amount + 1); buffer.end += amount; unique -= 2; } } else if (pull[pull_index].from < pull[pull_index].to) { // the values were pulled out to the right, so redistribute them back to the left Range buffer = new Range(pull[pull_index].range.end - pull[pull_index].count, pull[pull_index].range.end); while (buffer.length() > 0) { index = FindLastBackward(array, array[buffer.end - 1], new Range(pull[pull_index].range.start, buffer.start), comp, unique); int amount = buffer.start - index; Rotate(array, amount, new Range(index, buffer.end), true); buffer.start -= amount; buffer.end -= (amount + 1); unique -= 2; } } } } // double the size of each A and B subarray that will be merged in the next level if (!iterator.nextLevel()) break; } } } class MergeSorter { // n^2 sorting algorithm used to sort tiny chunks of the full array void InsertionSort(T array[], Range range, Comparator comp) { for (int i = range.start + 1; i < range.end; i++) { T temp = array[i]; int j; for (j = i; j > range.start && comp.compare(temp, array[j - 1]) < 0; j--) array[j] = array[j - 1]; array[j] = temp; } } // standard merge sort, so we have a baseline for how well WikiSort works void SortR(T array[], Range range, Comparator comp, T buffer[]) { if (range.length() < 32) { // insertion sort InsertionSort(array, range, comp); return; } int mid = range.start + (range.end - range.start)/2; Range A = new Range(range.start, mid); Range B = new Range(mid, range.end); SortR(array, A, comp, buffer); SortR(array, B, comp, buffer); // standard merge operation here (only A is copied to the buffer) java.lang.System.arraycopy(array, A.start, buffer, 0, A.length()); int A_count = 0, B_count = 0, insert = 0; while (A_count < A.length() && B_count < B.length()) { if (comp.compare(array[A.end + B_count], buffer[A_count]) >= 0) { array[A.start + insert] = buffer[A_count]; A_count++; } else { array[A.start + insert] = array[A.end + B_count]; B_count++; } insert++; } java.lang.System.arraycopy(buffer, A_count, array, A.start + insert, A.length() - A_count); } void Sort(T array[], Comparator comp) { @SuppressWarnings("unchecked") T[] buffer = (T[]) new Object[array.length]; SortR(array, new Range(0, array.length), comp, buffer); } public static void sort(T[] array, Comparator comp) { new MergeSorter ().Sort(array, comp); } } class SortRandom { public static Random rand; public static int nextInt(int max) { // set the seed on the random number generator if (rand == null) rand = new Random(); return rand.nextInt(max); } public static int nextInt() { return nextInt(2147483647); } } class Testing { int value(int index, int total) { return index; } } class TestingRandom extends Testing { int value(int index, int total) { return SortRandom.nextInt(); } } class TestingRandomFew extends Testing { int value(int index, 
int total) { return SortRandom.nextInt(100); } } class TestingMostlyDescending extends Testing { int value(int index, int total) { return total - index + SortRandom.nextInt(5) - 2; } } class TestingMostlyAscending extends Testing { int value(int index, int total) { return index + SortRandom.nextInt(5) - 2; } } class TestingAscending extends Testing { int value(int index, int total) { return index; } } class TestingDescending extends Testing { int value(int index, int total) { return total - index; } } class TestingEqual extends Testing { int value(int index, int total) { return 1000; } } class TestingJittered extends Testing { int value(int index, int total) { return (SortRandom.nextInt(100) <= 90) ? index : (index - 2); } } class TestingMostlyEqual extends Testing { int value(int index, int total) { return 1000 + SortRandom.nextInt(4); } } // the last 1/5 of the data is random class TestingAppend extends Testing { int value(int index, int total) { if (index > total - total/5) return SortRandom.nextInt(total); return index; } } class WikiSort { static double Seconds() { return System.currentTimeMillis()/1000.0; } // make sure the items within the given range are in a stable order // if you want to test the correctness of any changes you make to the main WikiSort function, // call it from within WikiSort after each step static void Verify(Test array[], Range range, TestComparator comp, String msg) { for (int index = range.start + 1; index < range.end; index++) { // if it's in ascending order then we're good // if both values are equal, we need to make sure the index values are ascending if (!(comp.compare(array[index - 1], array[index]) < 0 || (comp.compare(array[index], array[index - 1]) == 0 && array[index].index > array[index - 1].index))) { //for (int index2 = range.start; index2 < range.end; index2++) // System.out.println(array[index2].value + " (" + array[index2].index + ")"); System.out.println("failed with message: " + msg); throw new RuntimeException(); } } } public static void main (String[] args) throws java.lang.Exception { int max_size = 1500000; TestComparator comp = new TestComparator(); Test[] array1; Test[] array2; int compares1, compares2, total_compares1 = 0, total_compares2 = 0; Testing[] test_cases = { new TestingRandom(), new TestingRandomFew(), new TestingMostlyDescending(), new TestingMostlyAscending(), new TestingAscending(), new TestingDescending(), new TestingEqual(), new TestingJittered(), new TestingMostlyEqual(), new TestingAppend() }; WikiSorter Wiki = new WikiSorter (); MergeSorter Merge = new MergeSorter (); System.out.println("running test cases..."); int total = max_size; array1 = new Test[total]; array2 = new Test[total]; for (int test_case = 0; test_case < test_cases.length; test_case++) { for (int index = 0; index < total; index++) { Test item = new Test(); item.value = test_cases[test_case].value(index, total); item.index = index; array1[index] = item; array2[index] = item; } Wiki.Sort(array1, comp); Merge.Sort(array2, comp); Verify(array1, new Range(0, total), comp, "test case failed"); for (int index = 0; index < total; index++) { if (comp.compare(array1[index], array2[index]) != 0) throw new Exception(); if (array2[index].index != array1[index].index) throw new Exception(); } } System.out.println("passed!"); double total_time = Seconds(); double total_time1 = 0, total_time2 = 0; for (total = 0; total < max_size; total += 2048 * 16) { array1 = new Test[total]; array2 = new Test[total]; for (int index = 0; index < total; index++) { Test item = new 
Test(); item.value = SortRandom.nextInt(); item.index = index; array1[index] = item; array2[index] = item; } double time1 = Seconds(); TestComparator.comparisons = 0; Wiki.Sort(array1, comp); time1 = Seconds() - time1; total_time1 += time1; compares1 = TestComparator.comparisons; total_compares1 += compares1; double time2 = Seconds(); TestComparator.comparisons = 0; Merge.Sort(array2, comp); time2 = Seconds() - time2; total_time2 += time2; compares2 = TestComparator.comparisons; total_compares2 += compares2; System.out.format("[%d]\n", total); if (time1 >= time2) System.out.format("WikiSort: %f seconds, MergeSort: %f seconds (%f%% as fast)\n", time1, time2, time2/time1 * 100.0); else System.out.format("WikiSort: %f seconds, MergeSort: %f seconds (%f%% faster)\n", time1, time2, time2/time1 * 100.0 - 100.0); if (compares1 <= compares2) System.out.format("WikiSort: %d compares, MergeSort: %d compares (%f%% as many)\n", compares1, compares2, compares1 * 100.0/compares2); else System.out.format("WikiSort: %d compares, MergeSort: %d compares (%f%% more)\n", compares1, compares2, compares1 * 100.0/compares2 - 100.0); // make sure the arrays are sorted correctly, and that the results were stable System.out.println("verifying..."); Verify(array1, new Range(0, total), comp, "testing the final array"); for (int index = 0; index < total; index++) { if (comp.compare(array1[index], array2[index]) != 0) throw new Exception(); if (array2[index].index != array1[index].index) throw new Exception(); } System.out.println("correct!"); } total_time = Seconds() - total_time; System.out.format("tests completed in %f seconds\n", total_time); if (total_time1 >= total_time2) System.out.format("WikiSort: %f seconds, MergeSort: %f seconds (%f%% as fast)\n", total_time1, total_time2, total_time2/total_time1 * 100.0); else System.out.format("WikiSort: %f seconds, MergeSort: %f seconds (%f%% faster)\n", total_time1, total_time2, total_time2/total_time1 * 100.0 - 100.0); if (total_compares1 <= total_compares2) System.out.format("WikiSort: %d compares, MergeSort: %d compares (%f%% as many)\n", total_compares1, total_compares2, total_compares1 * 100.0/total_compares2); else System.out.format("WikiSort: %d compares, MergeSort: %d compares (%f%% more)\n", total_compares1, total_compares2, total_compares1 * 100.0/total_compares2 - 100.0); } }
/*********************************************************** WikiSort (public domain license) https://github.com/BonzaiThePenguin/WikiSort to run: clang -o WikiSort.x WikiSort.c -O3 (or replace 'clang' with 'gcc') ./WikiSort.x ***********************************************************/ #include#include #include #include #include #include #include #include #include /* record the number of comparisons */ /* note that this reduces WikiSort's performance when enabled */ #define PROFILE false /* verify that WikiSort is actually correct */ /* (this also reduces performance slightly) */ #define VERIFY false /* simulate comparisons that have a bit more overhead than just an inlined (int < int) */ /* (so we can tell whether reducing the number of comparisons was worth the added complexity) */ #define SLOW_COMPARISONS false /* whether to give WikiSort a full-size cache, to see how it performs when given more memory */ #define DYNAMIC_CACHE false double Seconds() { return clock() * 1.0/CLOCKS_PER_SEC; } /* various #defines for the C code */ #ifndef true #define true 1 #define false 0 typedef uint8_t bool; #endif #define Var(name, value) __typeof__(value) name = value #define Allocate(type, count) (type *)malloc((count) * sizeof(type)) size_t Min(const size_t a, const size_t b) { if (a < b) return a; return b; } size_t Max(const size_t a, const size_t b) { if (a > b) return a; return b; } /* structure to test stable sorting (index will contain its original index in the array, to make sure it doesn't switch places with other items) */ typedef struct { size_t value; #if VERIFY size_t index; #endif } Test; #if PROFILE /* global for testing how many comparisons are performed for each sorting algorithm */ size_t comparisons; #endif #if SLOW_COMPARISONS #define NOOP_SIZE 50 size_t noop1[NOOP_SIZE], noop2[NOOP_SIZE]; #endif bool TestCompare(Test item1, Test item2) { #if SLOW_COMPARISONS /* test slow comparisons by adding some fake overhead */ /* (in real-world use this might be string comparisons, etc.) */ size_t index; for (index = 0; index < NOOP_SIZE; index++) noop1[index] = noop2[index]; #endif #if PROFILE comparisons++; #endif return (item1.value < item2.value); } typedef bool (*Comparison)(Test, Test); /* structure to represent ranges within the array */ typedef struct { size_t start; size_t end; } Range; size_t Range_length(Range range) { return range.end - range.start; } Range Range_new(const size_t start, const size_t end) { Range range; range.start = start; range.end = end; return range; } /* toolbox functions used by the sorter */ /* swap value1 and value2 */ #define Swap(value1, value2) { \ Var(a, &(value1)); \ Var(b, &(value2)); \ \ Var(c, *a); \ *a = *b; \ *b = c; \ } /* 63 -> 32, 64 -> 64, etc. 
*/ /* this comes from Hacker's Delight */ size_t FloorPowerOfTwo (const size_t value) { size_t x = value; x = x | (x >> 1); x = x | (x >> 2); x = x | (x >> 4); x = x | (x >> 8); x = x | (x >> 16); #if __LP64__ x = x | (x >> 32); #endif return x - (x >> 1); } /* find the index of the first value within the range that is equal to array[index] */ size_t BinaryFirst(const Test array[], const Test value, const Range range, const Comparison compare) { size_t start = range.start, end = range.end - 1; if (range.start >= range.end) return range.start; while (start < end) { size_t mid = start + (end - start)/2; if (compare(array[mid], value)) start = mid + 1; else end = mid; } if (start == range.end - 1 && compare(array[start], value)) start++; return start; } /* find the index of the last value within the range that is equal to array[index], plus 1 */ size_t BinaryLast(const Test array[], const Test value, const Range range, const Comparison compare) { size_t start = range.start, end = range.end - 1; if (range.start >= range.end) return range.end; while (start < end) { size_t mid = start + (end - start)/2; if (!compare(value, array[mid])) start = mid + 1; else end = mid; } if (start == range.end - 1 && !compare(value, array[start])) start++; return start; } /* combine a linear search with a binary search to reduce the number of comparisons in situations */ /* where have some idea as to how many unique values there are and where the next value might be */ size_t FindFirstForward(const Test array[], const Test value, const Range range, const Comparison compare, const size_t unique) { size_t skip, index; if (Range_length(range) == 0) return range.start; skip = Max(Range_length(range)/unique, 1); for (index = range.start + skip; compare(array[index - 1], value); index += skip) if (index >= range.end - skip) return BinaryFirst(array, value, Range_new(index, range.end), compare); return BinaryFirst(array, value, Range_new(index - skip, index), compare); } size_t FindLastForward(const Test array[], const Test value, const Range range, const Comparison compare, const size_t unique) { size_t skip, index; if (Range_length(range) == 0) return range.start; skip = Max(Range_length(range)/unique, 1); for (index = range.start + skip; !compare(value, array[index - 1]); index += skip) if (index >= range.end - skip) return BinaryLast(array, value, Range_new(index, range.end), compare); return BinaryLast(array, value, Range_new(index - skip, index), compare); } size_t FindFirstBackward(const Test array[], const Test value, const Range range, const Comparison compare, const size_t unique) { size_t skip, index; if (Range_length(range) == 0) return range.start; skip = Max(Range_length(range)/unique, 1); for (index = range.end - skip; index > range.start && !compare(array[index - 1], value); index -= skip) if (index < range.start + skip) return BinaryFirst(array, value, Range_new(range.start, index), compare); return BinaryFirst(array, value, Range_new(index, index + skip), compare); } size_t FindLastBackward(const Test array[], const Test value, const Range range, const Comparison compare, const size_t unique) { size_t skip, index; if (Range_length(range) == 0) return range.start; skip = Max(Range_length(range)/unique, 1); for (index = range.end - skip; index > range.start && compare(value, array[index - 1]); index -= skip) if (index < range.start + skip) return BinaryLast(array, value, Range_new(range.start, index), compare); return BinaryLast(array, value, Range_new(index, index + skip), compare); } /* n^2 sorting 
algorithm used to sort tiny chunks of the full array */ void InsertionSort(Test array[], const Range range, const Comparison compare) { size_t i, j; for (i = range.start + 1; i < range.end; i++) { const Test temp = array[i]; for (j = i; j > range.start && compare(temp, array[j - 1]); j--) array[j] = array[j - 1]; array[j] = temp; } } /* reverse a range of values within the array */ void Reverse(Test array[], const Range range) { size_t index; for (index = Range_length(range)/2; index > 0; index--) Swap(array[range.start + index - 1], array[range.end - index]); } /* swap a series of values in the array */ void BlockSwap(Test array[], const size_t start1, const size_t start2, const size_t block_size) { size_t index; for (index = 0; index < block_size; index++) Swap(array[start1 + index], array[start2 + index]); } /* rotate the values in an array ([0 1 2 3] becomes [1 2 3 0] if we rotate by 1) */ /* this assumes that 0 <= amount <= range.length() */ void Rotate(Test array[], const size_t amount, const Range range, Test cache[], const size_t cache_size) { size_t split; Range range1, range2; if (Range_length(range) == 0) return; split = range.start + amount; range1 = Range_new(range.start, split); range2 = Range_new(split, range.end); /* if the smaller of the two ranges fits into the cache, it's *slightly* faster copying it there and shifting the elements over */ if (Range_length(range1) <= Range_length(range2)) { if (Range_length(range1) <= cache_size) { memcpy(&cache[0], &array[range1.start], Range_length(range1) * sizeof(array[0])); memmove(&array[range1.start], &array[range2.start], Range_length(range2) * sizeof(array[0])); memcpy(&array[range1.start + Range_length(range2)], &cache[0], Range_length(range1) * sizeof(array[0])); return; } } else { if (Range_length(range2) <= cache_size) { memcpy(&cache[0], &array[range2.start], Range_length(range2) * sizeof(array[0])); memmove(&array[range2.end - Range_length(range1)], &array[range1.start], Range_length(range1) * sizeof(array[0])); memcpy(&array[range1.start], &cache[0], Range_length(range2) * sizeof(array[0])); return; } } Reverse(array, range1); Reverse(array, range2); Reverse(array, range); } /* calculate how to scale the index value to the range within the array */ /* the bottom-up merge sort only operates on values that are powers of two, */ /* so scale down to that power of two, then use a fraction to scale back again */ typedef struct { size_t size, power_of_two; size_t numerator, decimal; size_t denominator, decimal_step, numerator_step; } WikiIterator; void WikiIterator_begin(WikiIterator *me) { me->numerator = me->decimal = 0; } Range WikiIterator_nextRange(WikiIterator *me) { size_t start = me->decimal; me->decimal += me->decimal_step; me->numerator += me->numerator_step; if (me->numerator >= me->denominator) { me->numerator -= me->denominator; me->decimal++; } return Range_new(start, me->decimal); } bool WikiIterator_finished(WikiIterator *me) { return (me->decimal >= me->size); } bool WikiIterator_nextLevel(WikiIterator *me) { me->decimal_step += me->decimal_step; me->numerator_step += me->numerator_step; if (me->numerator_step >= me->denominator) { me->numerator_step -= me->denominator; me->decimal_step++; } return (me->decimal_step < me->size); } size_t WikiIterator_length(WikiIterator *me) { return me->decimal_step; } WikiIterator WikiIterator_new(size_t size2, size_t min_level) { WikiIterator me; me.size = size2; me.power_of_two = FloorPowerOfTwo(me.size); me.denominator = me.power_of_two/min_level; me.numerator_step = 
me.size % me.denominator; me.decimal_step = me.size/me.denominator; WikiIterator_begin(&me); return me; } /* merge two ranges from one array and save the results into a different array */ void MergeInto(Test from[], const Range A, const Range B, const Comparison compare, Test into[]) { Test *A_index = &from[A.start], *B_index = &from[B.start]; Test *A_last = &from[A.end], *B_last = &from[B.end]; Test *insert_index = &into[0]; while (true) { if (!compare(*B_index, *A_index)) { *insert_index = *A_index; A_index++; insert_index++; if (A_index == A_last) { /* copy the remainder of B into the final array */ memcpy(insert_index, B_index, (B_last - B_index) * sizeof(from[0])); break; } } else { *insert_index = *B_index; B_index++; insert_index++; if (B_index == B_last) { /* copy the remainder of A into the final array */ memcpy(insert_index, A_index, (A_last - A_index) * sizeof(from[0])); break; } } } } /* merge operation using an external buffer, */ void MergeExternal(Test array[], const Range A, const Range B, const Comparison compare, Test cache[]) { /* A fits into the cache, so use that instead of the internal buffer */ Test *A_index = &cache[0]; Test *B_index = &array[B.start]; Test *insert_index = &array[A.start]; Test *A_last = &cache[Range_length(A)]; Test *B_last = &array[B.end]; if (Range_length(B) > 0 && Range_length(A) > 0) { while (true) { if (!compare(*B_index, *A_index)) { *insert_index = *A_index; A_index++; insert_index++; if (A_index == A_last) break; } else { *insert_index = *B_index; B_index++; insert_index++; if (B_index == B_last) break; } } } /* copy the remainder of A into the final array */ memcpy(insert_index, A_index, (A_last - A_index) * sizeof(array[0])); } /* merge operation using an internal buffer */ void MergeInternal(Test array[], const Range A, const Range B, const Comparison compare, const Range buffer) { /* whenever we find a value to add to the final array, swap it with the value that's already in that spot */ /* when this algorithm is finished, 'buffer' will contain its original contents, but in a different order */ size_t A_count = 0, B_count = 0, insert = 0; if (Range_length(B) > 0 && Range_length(A) > 0) { while (true) { if (!compare(array[B.start + B_count], array[buffer.start + A_count])) { Swap(array[A.start + insert], array[buffer.start + A_count]); A_count++; insert++; if (A_count >= Range_length(A)) break; } else { Swap(array[A.start + insert], array[B.start + B_count]); B_count++; insert++; if (B_count >= Range_length(B)) break; } } } /* swap the remainder of A into the final array */ BlockSwap(array, buffer.start + A_count, A.start + insert, Range_length(A) - A_count); } /* merge operation without a buffer */ void MergeInPlace(Test array[], Range A, Range B, const Comparison compare, Test cache[], const size_t cache_size) { if (Range_length(A) == 0 || Range_length(B) == 0) return; /* this just repeatedly binary searches into B and rotates A into position. the paper suggests using the 'rotation-based Hwang and Lin algorithm' here, but I decided to stick with this because it had better situational performance (Hwang and Lin is designed for merging subarrays of very different sizes, but WikiSort almost always uses subarrays that are roughly the same size) normally this is incredibly suboptimal, but this function is only called when none of the A or B blocks in any subarray contained 2√A unique values, which places a hard limit on the number of times this will ACTUALLY need to binary search and rotate. 
according to my analysis the worst case is √A rotations performed on √A items once the constant factors are removed, which ends up being O(n) again, this is NOT a general-purpose solution – it only works well in this case! kind of like how the O(n^2) insertion sort is used in some places */ while (true) { /* find the first place in B where the first item in A needs to be inserted */ size_t mid = BinaryFirst(array, array[A.start], B, compare); /* rotate A into place */ size_t amount = mid - A.end; Rotate(array, Range_length(A), Range_new(A.start, mid), cache, cache_size); if (B.end == mid) break; /* calculate the new A and B ranges */ B.start = mid; A = Range_new(A.start + amount, B.start); A.start = BinaryLast(array, array[A.start], A, compare); if (Range_length(A) == 0) break; } } /* bottom-up merge sort combined with an in-place merge algorithm for O(1) memory use */ void WikiSort(Test array[], const size_t size, const Comparison compare) { /* use a small cache to speed up some of the operations */ #if DYNAMIC_CACHE size_t cache_size; Test *cache = NULL; #else /* since the cache size is fixed, it's still O(1) memory! */ /* just keep in mind that making it too small ruins the point (nothing will fit into it), */ /* and making it too large also ruins the point (so much for "low memory"!) */ /* removing the cache entirely still gives 70% of the performance of a standard merge */ #define CACHE_SIZE 512 const size_t cache_size = CACHE_SIZE; Test cache[CACHE_SIZE]; #endif WikiIterator iterator; /* if the array is of size 0, 1, 2, or 3, just sort them like so: */ if (size < 4) { if (size == 3) { /* hard-coded insertion sort */ if (compare(array[1], array[0])) Swap(array[0], array[1]); if (compare(array[2], array[1])) { Swap(array[1], array[2]); if (compare(array[1], array[0])) Swap(array[0], array[1]); } } else if (size == 2) { /* swap the items if they're out of order */ if (compare(array[1], array[0])) Swap(array[0], array[1]); } return; } /* sort groups of 4-8 items at a time using an unstable sorting network, */ /* but keep track of the original item orders to force it to be stable */ /* http://pages.ripco.net/~jgamble/nw.html */ iterator = WikiIterator_new(size, 4); WikiIterator_begin(&iterator); while (!WikiIterator_finished(&iterator)) { uint8_t order[] = { 0, 1, 2, 3, 4, 5, 6, 7 }; Range range = WikiIterator_nextRange(&iterator); #define SWAP(x, y) if (compare(array[range.start + y], array[range.start + x]) || \ (order[x] > order[y] && !compare(array[range.start + x], array[range.start + y]))) { \ Swap(array[range.start + x], array[range.start + y]); Swap(order[x], order[y]); } if (Range_length(range) == 8) { SWAP(0, 1); SWAP(2, 3); SWAP(4, 5); SWAP(6, 7); SWAP(0, 2); SWAP(1, 3); SWAP(4, 6); SWAP(5, 7); SWAP(1, 2); SWAP(5, 6); SWAP(0, 4); SWAP(3, 7); SWAP(1, 5); SWAP(2, 6); SWAP(1, 4); SWAP(3, 6); SWAP(2, 4); SWAP(3, 5); SWAP(3, 4); } else if (Range_length(range) == 7) { SWAP(1, 2); SWAP(3, 4); SWAP(5, 6); SWAP(0, 2); SWAP(3, 5); SWAP(4, 6); SWAP(0, 1); SWAP(4, 5); SWAP(2, 6); SWAP(0, 4); SWAP(1, 5); SWAP(0, 3); SWAP(2, 5); SWAP(1, 3); SWAP(2, 4); SWAP(2, 3); } else if (Range_length(range) == 6) { SWAP(1, 2); SWAP(4, 5); SWAP(0, 2); SWAP(3, 5); SWAP(0, 1); SWAP(3, 4); SWAP(2, 5); SWAP(0, 3); SWAP(1, 4); SWAP(2, 4); SWAP(1, 3); SWAP(2, 3); } else if (Range_length(range) == 5) { SWAP(0, 1); SWAP(3, 4); SWAP(2, 4); SWAP(2, 3); SWAP(1, 4); SWAP(0, 3); SWAP(0, 2); SWAP(1, 3); SWAP(1, 2); } else if (Range_length(range) == 4) { SWAP(0, 1); SWAP(2, 3); SWAP(0, 2); SWAP(1, 3); SWAP(1, 2); } } if 
(size < 8) return; #if DYNAMIC_CACHE /* good choices for the cache size are: */ /* (size + 1)/2 – turns into a full-speed standard merge sort since everything fits into the cache */ cache_size = (size + 1)/2; cache = (Test *)malloc(cache_size * sizeof(array[0])); if (!cache) { /* sqrt((size + 1)/2) + 1 – this will be the size of the A blocks at the largest level of merges, */ /* so a buffer of this size would allow it to skip using internal or in-place merges for anything */ cache_size = sqrt(cache_size) + 1; cache = (Test *)malloc(cache_size * sizeof(array[0])); if (!cache) { /* 512 – chosen from careful testing as a good balance between fixed-size memory use and run time */ if (cache_size > 512) { cache_size = 512; cache = (Test *)malloc(cache_size * sizeof(array[0])); } /* 0 – if the system simply cannot allocate any extra memory whatsoever, no memory works just fine */ if (!cache) cache_size = 0; } } #endif /* then merge sort the higher levels, which can be 8-15, 16-31, 32-63, 64-127, etc. */ while (true) { /* if every A and B block will fit into the cache, use a special branch specifically for merging with the cache */ /* (we use < rather than <= since the block size might be one more than iterator.length()) */ if (WikiIterator_length(&iterator) < cache_size) { /* if four subarrays fit into the cache, it's faster to merge both pairs of subarrays into the cache, */ /* then merge the two merged subarrays from the cache back into the original array */ if ((WikiIterator_length(&iterator) + 1) * 4 <= cache_size && WikiIterator_length(&iterator) * 4 <= size) { WikiIterator_begin(&iterator); while (!WikiIterator_finished(&iterator)) { /* merge A1 and B1 into the cache */ Range A1, B1, A2, B2, A3, B3; A1 = WikiIterator_nextRange(&iterator); B1 = WikiIterator_nextRange(&iterator); A2 = WikiIterator_nextRange(&iterator); B2 = WikiIterator_nextRange(&iterator); if (compare(array[B1.end - 1], array[A1.start])) { /* the two ranges are in reverse order, so copy them in reverse order into the cache */ memcpy(&cache[Range_length(B1)], &array[A1.start], Range_length(A1) * sizeof(array[0])); memcpy(&cache[0], &array[B1.start], Range_length(B1) * sizeof(array[0])); } else if (compare(array[B1.start], array[A1.end - 1])) { /* these two ranges weren't already in order, so merge them into the cache */ MergeInto(array, A1, B1, compare, &cache[0]); } else { /* if A1, B1, A2, and B2 are all in order, skip doing anything else */ if (!compare(array[B2.start], array[A2.end - 1]) && !compare(array[A2.start], array[B1.end - 1])) continue; /* copy A1 and B1 into the cache in the same order */ memcpy(&cache[0], &array[A1.start], Range_length(A1) * sizeof(array[0])); memcpy(&cache[Range_length(A1)], &array[B1.start], Range_length(B1) * sizeof(array[0])); } A1 = Range_new(A1.start, B1.end); /* merge A2 and B2 into the cache */ if (compare(array[B2.end - 1], array[A2.start])) { /* the two ranges are in reverse order, so copy them in reverse order into the cache */ memcpy(&cache[Range_length(A1) + Range_length(B2)], &array[A2.start], Range_length(A2) * sizeof(array[0])); memcpy(&cache[Range_length(A1)], &array[B2.start], Range_length(B2) * sizeof(array[0])); } else if (compare(array[B2.start], array[A2.end - 1])) { /* these two ranges weren't already in order, so merge them into the cache */ MergeInto(array, A2, B2, compare, &cache[Range_length(A1)]); } else { /* copy A2 and B2 into the cache in the same order */ memcpy(&cache[Range_length(A1)], &array[A2.start], Range_length(A2) * sizeof(array[0])); 
memcpy(&cache[Range_length(A1) + Range_length(A2)], &array[B2.start], Range_length(B2) * sizeof(array[0])); } A2 = Range_new(A2.start, B2.end); /* merge A1 and A2 from the cache into the array */ A3 = Range_new(0, Range_length(A1)); B3 = Range_new(Range_length(A1), Range_length(A1) + Range_length(A2)); if (compare(cache[B3.end - 1], cache[A3.start])) { /* the two ranges are in reverse order, so copy them in reverse order into the array */ memcpy(&array[A1.start + Range_length(A2)], &cache[A3.start], Range_length(A3) * sizeof(array[0])); memcpy(&array[A1.start], &cache[B3.start], Range_length(B3) * sizeof(array[0])); } else if (compare(cache[B3.start], cache[A3.end - 1])) { /* these two ranges weren't already in order, so merge them back into the array */ MergeInto(cache, A3, B3, compare, &array[A1.start]); } else { /* copy A3 and B3 into the array in the same order */ memcpy(&array[A1.start], &cache[A3.start], Range_length(A3) * sizeof(array[0])); memcpy(&array[A1.start + Range_length(A1)], &cache[B3.start], Range_length(B3) * sizeof(array[0])); } } /* we merged two levels at the same time, so we're done with this level already */ /* (iterator.nextLevel() is called again at the bottom of this outer merge loop) */ WikiIterator_nextLevel(&iterator); } else { WikiIterator_begin(&iterator); while (!WikiIterator_finished(&iterator)) { Range A = WikiIterator_nextRange(&iterator); Range B = WikiIterator_nextRange(&iterator); if (compare(array[B.end - 1], array[A.start])) { /* the two ranges are in reverse order, so a simple rotation should fix it */ Rotate(array, Range_length(A), Range_new(A.start, B.end), cache, cache_size); } else if (compare(array[B.start], array[A.end - 1])) { /* these two ranges weren't already in order, so we'll need to merge them! */ memcpy(&cache[0], &array[A.start], Range_length(A) * sizeof(array[0])); MergeExternal(array, A, B, compare, cache); } } } } else { /* this is where the in-place merge logic starts! 1. pull out two internal buffers each containing √A unique values 1a. adjust block_size and buffer_size if we couldn't find enough unique values 2. loop over the A and B subarrays within this level of the merge sort 3. break A and B into blocks of size 'block_size' 4. "tag" each of the A blocks with values from the first internal buffer 5. roll the A blocks through the B blocks and drop/rotate them where they belong 6. merge each A block with any B values that follow, using the cache or the second internal buffer 7. sort the second internal buffer if it exists 8. 
redistribute the two internal buffers back into the array */ size_t block_size = sqrt(WikiIterator_length(&iterator)); size_t buffer_size = WikiIterator_length(&iterator)/block_size + 1; /* as an optimization, we really only need to pull out the internal buffers once for each level of merges */ /* after that we can reuse the same buffers over and over, then redistribute it when we're finished with this level */ Range buffer1, buffer2, A, B; bool find_separately; size_t index, last, count, find, start, pull_index = 0; struct { size_t from, to, count; Range range; } pull[2]; pull[0].from = pull[0].to = pull[0].count = 0; pull[0].range = Range_new(0, 0); pull[1].from = pull[1].to = pull[1].count = 0; pull[1].range = Range_new(0, 0); buffer1 = Range_new(0, 0); buffer2 = Range_new(0, 0); /* find two internal buffers of size 'buffer_size' each */ find = buffer_size + buffer_size; find_separately = false; if (block_size <= cache_size) { /* if every A block fits into the cache then we won't need the second internal buffer, */ /* so we really only need to find 'buffer_size' unique values */ find = buffer_size; } else if (find > WikiIterator_length(&iterator)) { /* we can't fit both buffers into the same A or B subarray, so find two buffers separately */ find = buffer_size; find_separately = true; } /* we need to find either a single contiguous space containing 2√A unique values (which will be split up into two buffers of size √A each), */ /* or we need to find one buffer of < 2√A unique values, and a second buffer of √A unique values, */ /* OR if we couldn't find that many unique values, we need the largest possible buffer we can get */ /* in the case where it couldn't find a single buffer of at least √A unique values, */ /* all of the Merge steps must be replaced by a different merge algorithm (MergeInPlace) */ WikiIterator_begin(&iterator); while (!WikiIterator_finished(&iterator)) { A = WikiIterator_nextRange(&iterator); B = WikiIterator_nextRange(&iterator); /* just store information about where the values will be pulled from and to, */ /* as well as how many values there are, to create the two internal buffers */ #define PULL(_to) \ pull[pull_index].range = Range_new(A.start, B.end); \ pull[pull_index].count = count; \ pull[pull_index].from = index; \ pull[pull_index].to = _to /* check A for the number of unique values we need to fill an internal buffer */ /* these values will be pulled out to the start of A */ for (last = A.start, count = 1; count < find; last = index, count++) { index = FindLastForward(array, array[last], Range_new(last + 1, A.end), compare, find - count); if (index == A.end) break; } index = last; if (count >= buffer_size) { /* keep track of the range within the array where we'll need to "pull out" these values to create the internal buffer */ PULL(A.start); pull_index = 1; if (count == buffer_size + buffer_size) { /* we were able to find a single contiguous section containing 2√A unique values, */ /* so this section can be used to contain both of the internal buffers we'll need */ buffer1 = Range_new(A.start, A.start + buffer_size); buffer2 = Range_new(A.start + buffer_size, A.start + count); break; } else if (find == buffer_size + buffer_size) { /* we found a buffer that contains at least √A unique values, but did not contain the full 2√A unique values, */ /* so we still need to find a second separate buffer of at least √A unique values */ buffer1 = Range_new(A.start, A.start + count); find = buffer_size; } else if (block_size <= cache_size) { /* we found the first and 
only internal buffer that we need, so we're done! */ buffer1 = Range_new(A.start, A.start + count); break; } else if (find_separately) { /* found one buffer, but now find the other one */ buffer1 = Range_new(A.start, A.start + count); find_separately = false; } else { /* we found a second buffer in an 'A' subarray containing √A unique values, so we're done! */ buffer2 = Range_new(A.start, A.start + count); break; } } else if (pull_index == 0 && count > Range_length(buffer1)) { /* keep track of the largest buffer we were able to find */ buffer1 = Range_new(A.start, A.start + count); PULL(A.start); } /* check B for the number of unique values we need to fill an internal buffer */ /* these values will be pulled out to the end of B */ for (last = B.end - 1, count = 1; count < find; last = index - 1, count++) { index = FindFirstBackward(array, array[last], Range_new(B.start, last), compare, find - count); if (index == B.start) break; } index = last; if (count >= buffer_size) { /* keep track of the range within the array where we'll need to "pull out" these values to create the internal buffer */ PULL(B.end); pull_index = 1; if (count == buffer_size + buffer_size) { /* we were able to find a single contiguous section containing 2√A unique values, */ /* so this section can be used to contain both of the internal buffers we'll need */ buffer1 = Range_new(B.end - count, B.end - buffer_size); buffer2 = Range_new(B.end - buffer_size, B.end); break; } else if (find == buffer_size + buffer_size) { /* we found a buffer that contains at least √A unique values, but did not contain the full 2√A unique values, */ /* so we still need to find a second separate buffer of at least √A unique values */ buffer1 = Range_new(B.end - count, B.end); find = buffer_size; } else if (block_size <= cache_size) { /* we found the first and only internal buffer that we need, so we're done! */ buffer1 = Range_new(B.end - count, B.end); break; } else if (find_separately) { /* found one buffer, but now find the other one */ buffer1 = Range_new(B.end - count, B.end); find_separately = false; } else { /* buffer2 will be pulled out from a 'B' subarray, so if the first buffer was pulled out from the corresponding 'A' subarray, */ /* we need to adjust the end point for that A subarray so it knows to stop redistributing its values before reaching buffer2 */ if (pull[0].range.start == A.start) pull[0].range.end -= pull[1].count; /* we found a second buffer in an 'B' subarray containing √A unique values, so we're done! 
*/ buffer2 = Range_new(B.end - count, B.end); break; } } else if (pull_index == 0 && count > Range_length(buffer1)) { /* keep track of the largest buffer we were able to find */ buffer1 = Range_new(B.end - count, B.end); PULL(B.end); } } /* pull out the two ranges so we can use them as internal buffers */ for (pull_index = 0; pull_index < 2; pull_index++) { Range range; size_t length = pull[pull_index].count; if (pull[pull_index].to < pull[pull_index].from) { /* we're pulling the values out to the left, which means the start of an A subarray */ index = pull[pull_index].from; for (count = 1; count < length; count++) { index = FindFirstBackward(array, array[index - 1], Range_new(pull[pull_index].to, pull[pull_index].from - (count - 1)), compare, length - count); range = Range_new(index + 1, pull[pull_index].from + 1); Rotate(array, Range_length(range) - count, range, cache, cache_size); pull[pull_index].from = index + count; } } else if (pull[pull_index].to > pull[pull_index].from) { /* we're pulling values out to the right, which means the end of a B subarray */ index = pull[pull_index].from + 1; for (count = 1; count < length; count++) { index = FindLastForward(array, array[index], Range_new(index, pull[pull_index].to), compare, length - count); range = Range_new(pull[pull_index].from, index - 1); Rotate(array, count, range, cache, cache_size); pull[pull_index].from = index - 1 - count; } } } /* adjust block_size and buffer_size based on the values we were able to pull out */ buffer_size = Range_length(buffer1); block_size = WikiIterator_length(&iterator)/buffer_size + 1; /* the first buffer NEEDS to be large enough to tag each of the evenly sized A blocks, */ /* so this was originally here to test the math for adjusting block_size above */ /* assert((WikiIterator_length(&iterator) + 1)/block_size <= buffer_size); */ /* now that the two internal buffers have been created, it's time to merge each A+B combination at this level of the merge sort! */ WikiIterator_begin(&iterator); while (!WikiIterator_finished(&iterator)) { A = WikiIterator_nextRange(&iterator); B = WikiIterator_nextRange(&iterator); /* remove any parts of A or B that are being used by the internal buffers */ start = A.start; if (start == pull[0].range.start) { if (pull[0].from > pull[0].to) { A.start += pull[0].count; /* if the internal buffer takes up the entire A or B subarray, then there's nothing to merge */ /* this only happens for very small subarrays, like √4 = 2, 2 * (2 internal buffers) = 4, */ /* which also only happens when cache_size is small or 0 since it'd otherwise use MergeExternal */ if (Range_length(A) == 0) continue; } else if (pull[0].from < pull[0].to) { B.end -= pull[0].count; if (Range_length(B) == 0) continue; } } if (start == pull[1].range.start) { if (pull[1].from > pull[1].to) { A.start += pull[1].count; if (Range_length(A) == 0) continue; } else if (pull[1].from < pull[1].to) { B.end -= pull[1].count; if (Range_length(B) == 0) continue; } } if (compare(array[B.end - 1], array[A.start])) { /* the two ranges are in reverse order, so a simple rotation should fix it */ Rotate(array, Range_length(A), Range_new(A.start, B.end), cache, cache_size); } else if (compare(array[A.end], array[A.end - 1])) { /* these two ranges weren't already in order, so we'll need to merge them! */ Range blockA, firstA, lastA, lastB, blockB; size_t indexA, findA; /* break the remainder of A into blocks. 
firstA is the uneven-sized first A block */ blockA = Range_new(A.start, A.end); firstA = Range_new(A.start, A.start + Range_length(blockA) % block_size); /* swap the first value of each A block with the value in buffer1 */ for (indexA = buffer1.start, index = firstA.end; index < blockA.end; indexA++, index += block_size) Swap(array[indexA], array[index]); /* start rolling the A blocks through the B blocks! */ /* whenever we leave an A block behind, we'll need to merge the previous A block with any B blocks that follow it, so track that information as well */ lastA = firstA; lastB = Range_new(0, 0); blockB = Range_new(B.start, B.start + Min(block_size, Range_length(B))); blockA.start += Range_length(firstA); indexA = buffer1.start; /* if the first unevenly sized A block fits into the cache, copy it there for when we go to Merge it */ /* otherwise, if the second buffer is available, block swap the contents into that */ if (Range_length(lastA) <= cache_size) memcpy(&cache[0], &array[lastA.start], Range_length(lastA) * sizeof(array[0])); else if (Range_length(buffer2) > 0) BlockSwap(array, lastA.start, buffer2.start, Range_length(lastA)); if (Range_length(blockA) > 0) { while (true) { /* if there's a previous B block and the first value of the minimum A block is <= the last value of the previous B block, */ /* then drop that minimum A block behind. or if there are no B blocks left then keep dropping the remaining A blocks. */ if ((Range_length(lastB) > 0 && !compare(array[lastB.end - 1], array[indexA])) || Range_length(blockB) == 0) { /* figure out where to split the previous B block, and rotate it at the split */ size_t B_split = BinaryFirst(array, array[indexA], lastB, compare); size_t B_remaining = lastB.end - B_split; /* swap the minimum A block to the beginning of the rolling A blocks */ size_t minA = blockA.start; for (findA = minA + block_size; findA < blockA.end; findA += block_size) if (compare(array[findA], array[minA])) minA = findA; BlockSwap(array, blockA.start, minA, block_size); /* swap the first item of the previous A block back with its original value, which is stored in buffer1 */ Swap(array[blockA.start], array[indexA]); indexA++; /* locally merge the previous A block with the B values that follow it if lastA fits into the external cache we'll use that (with MergeExternal), or if the second internal buffer exists we'll use that (with MergeInternal), or failing that we'll use a strictly in-place merge algorithm (MergeInPlace) */ if (Range_length(lastA) <= cache_size) MergeExternal(array, lastA, Range_new(lastA.end, B_split), compare, cache); else if (Range_length(buffer2) > 0) MergeInternal(array, lastA, Range_new(lastA.end, B_split), compare, buffer2); else MergeInPlace(array, lastA, Range_new(lastA.end, B_split), compare, cache, cache_size); if (Range_length(buffer2) > 0 || block_size <= cache_size) { /* copy the previous A block into the cache or buffer2, since that's where we need it to be when we go to merge it anyway */ if (block_size <= cache_size) memcpy(&cache[0], &array[blockA.start], block_size * sizeof(array[0])); else BlockSwap(array, blockA.start, buffer2.start, block_size); /* this is equivalent to rotating, but faster */ /* the area normally taken up by the A block is either the contents of buffer2, or data we don't need anymore since we memcopied it */ /* either way, we don't need to retain the order of those items, so instead of rotating we can just block swap B to where it belongs */ BlockSwap(array, B_split, blockA.start + block_size - B_remaining, 
B_remaining); } else { /* we are unable to use the 'buffer2' trick to speed up the rotation operation since buffer2 doesn't exist, so perform a normal rotation */ Rotate(array, blockA.start - B_split, Range_new(B_split, blockA.start + block_size), cache, cache_size); } /* update the range for the remaining A blocks, and the range remaining from the B block after it was split */ lastA = Range_new(blockA.start - B_remaining, blockA.start - B_remaining + block_size); lastB = Range_new(lastA.end, lastA.end + B_remaining); /* if there are no more A blocks remaining, this step is finished! */ blockA.start += block_size; if (Range_length(blockA) == 0) break; } else if (Range_length(blockB) < block_size) { /* move the last B block, which is unevenly sized, to before the remaining A blocks, by using a rotation */ /* the cache is disabled here since it might contain the contents of the previous A block */ Rotate(array, blockB.start - blockA.start, Range_new(blockA.start, blockB.end), cache, 0); lastB = Range_new(blockA.start, blockA.start + Range_length(blockB)); blockA.start += Range_length(blockB); blockA.end += Range_length(blockB); blockB.end = blockB.start; } else { /* roll the leftmost A block to the end by swapping it with the next B block */ BlockSwap(array, blockA.start, blockB.start, block_size); lastB = Range_new(blockA.start, blockA.start + block_size); blockA.start += block_size; blockA.end += block_size; blockB.start += block_size; if (blockB.end > B.end - block_size) blockB.end = B.end; else blockB.end += block_size; } } } /* merge the last A block with the remaining B values */ if (Range_length(lastA) <= cache_size) MergeExternal(array, lastA, Range_new(lastA.end, B.end), compare, cache); else if (Range_length(buffer2) > 0) MergeInternal(array, lastA, Range_new(lastA.end, B.end), compare, buffer2); else MergeInPlace(array, lastA, Range_new(lastA.end, B.end), compare, cache, cache_size); } } /* when we're finished with this merge step we should have the one or two internal buffers left over, where the second buffer is all jumbled up */ /* insertion sort the second buffer, then redistribute the buffers back into the array using the opposite process used for creating the buffer */ /* while an unstable sort like quicksort could be applied here, in benchmarks it was consistently slightly slower than a simple insertion sort, */ /* even for tens of millions of items. 
this may be because insertion sort is quite fast when the data is already somewhat sorted, like it is here */ InsertionSort(array, buffer2, compare); for (pull_index = 0; pull_index < 2; pull_index++) { size_t amount, unique = pull[pull_index].count * 2; if (pull[pull_index].from > pull[pull_index].to) { /* the values were pulled out to the left, so redistribute them back to the right */ Range buffer = Range_new(pull[pull_index].range.start, pull[pull_index].range.start + pull[pull_index].count); while (Range_length(buffer) > 0) { index = FindFirstForward(array, array[buffer.start], Range_new(buffer.end, pull[pull_index].range.end), compare, unique); amount = index - buffer.end; Rotate(array, Range_length(buffer), Range_new(buffer.start, index), cache, cache_size); buffer.start += (amount + 1); buffer.end += amount; unique -= 2; } } else if (pull[pull_index].from < pull[pull_index].to) { /* the values were pulled out to the right, so redistribute them back to the left */ Range buffer = Range_new(pull[pull_index].range.end - pull[pull_index].count, pull[pull_index].range.end); while (Range_length(buffer) > 0) { index = FindLastBackward(array, array[buffer.end - 1], Range_new(pull[pull_index].range.start, buffer.start), compare, unique); amount = buffer.start - index; Rotate(array, amount, Range_new(index, buffer.end), cache, cache_size); buffer.start -= amount; buffer.end -= (amount + 1); unique -= 2; } } } } /* double the size of each A and B subarray that will be merged in the next level */ if (!WikiIterator_nextLevel(&iterator)) break; } #if DYNAMIC_CACHE if (cache) free(cache); #endif #undef CACHE_SIZE } /* standard merge sort, so we have a baseline for how well WikiSort works */ void MergeSortR(Test array[], const Range range, const Comparison compare, Test buffer[]) { size_t mid, A_count = 0, B_count = 0, insert = 0; Range A, B; if (Range_length(range) < 32) { InsertionSort(array, range, compare); return; } mid = range.start + (range.end - range.start)/2; A = Range_new(range.start, mid); B = Range_new(mid, range.end); MergeSortR(array, A, compare, buffer); MergeSortR(array, B, compare, buffer); /* standard merge operation here (only A is copied to the buffer, and only the parts that weren't already where they should be) */ A = Range_new(BinaryLast(array, array[B.start], A, compare), A.end); memcpy(&buffer[0], &array[A.start], Range_length(A) * sizeof(array[0])); while (A_count < Range_length(A) && B_count < Range_length(B)) { if (!compare(array[A.end + B_count], buffer[A_count])) { array[A.start + insert] = buffer[A_count]; A_count++; } else { array[A.start + insert] = array[A.end + B_count]; B_count++; } insert++; } memcpy(&array[A.start + insert], &buffer[A_count], (Range_length(A) - A_count) * sizeof(array[0])); } void MergeSort(Test array[], const size_t array_count, const Comparison compare) { Var(buffer, Allocate(Test, (array_count + 1)/2)); MergeSortR(array, Range_new(0, array_count), compare, buffer); free(buffer); } size_t TestingRandom(size_t index, size_t total) { return rand(); } size_t TestingRandomFew(size_t index, size_t total) { return rand() * (100.0/RAND_MAX); } size_t TestingMostlyDescending(size_t index, size_t total) { return total - index + rand() * 1.0/RAND_MAX * 5 - 2.5; } size_t TestingMostlyAscending(size_t index, size_t total) { return index + rand() * 1.0/RAND_MAX * 5 - 2.5; } size_t TestingAscending(size_t index, size_t total) { return index; } size_t TestingDescending(size_t index, size_t total) { return total - index; } size_t TestingEqual(size_t index, 
size_t total) { return 1000; } size_t TestingJittered(size_t index, size_t total) { return (rand() * 1.0/RAND_MAX <= 0.9) ? index : (index - 2); } size_t TestingMostlyEqual(size_t index, size_t total) { return 1000 + rand() * 1.0/RAND_MAX * 4; } /* the last 1/5 of the data is random */ size_t TestingAppend(size_t index, size_t total) { if (index > total - total/5) return rand() * 1.0/RAND_MAX * total; return index; } /* make sure the items within the given range are in a stable order */ /* if you want to test the correctness of any changes you make to the main WikiSort function, move this function to the top of the file and call it from within WikiSort after each step */ #if VERIFY void WikiVerify(const Test array[], const Range range, const Comparison compare, const char *msg) { size_t index; for (index = range.start + 1; index < range.end; index++) { /* if it's in ascending order then we're good */ /* if both values are equal, we need to make sure the index values are ascending */ if (!(compare(array[index - 1], array[index]) || (!compare(array[index], array[index - 1]) && array[index].index > array[index - 1].index))) { /*for (index2 = range.start; index2 < range.end; index2++) */ /* printf("%lu (%lu) ", array[index2].value, array[index2].index); */ printf("failed with message: %s\n", msg); assert(false); } } } #endif int main() { size_t total, index; double total_time, total_time1, total_time2; const size_t max_size = 1500000; Var(array1, Allocate(Test, max_size)); Var(array2, Allocate(Test, max_size)); Comparison compare = TestCompare; #if PROFILE size_t compares1, compares2, total_compares1 = 0, total_compares2 = 0; #endif #if !SLOW_COMPARISONS && VERIFY size_t test_case; __typeof__(&TestingRandom) test_cases[] = { TestingRandom, TestingRandomFew, TestingMostlyDescending, TestingMostlyAscending, TestingAscending, TestingDescending, TestingEqual, TestingJittered, TestingMostlyEqual, TestingAppend }; #endif /* initialize the random-number generator */ srand(time(NULL)); /*srand(10141985);*/ /* in case you want the same random numbers */ total = max_size; #if !SLOW_COMPARISONS && VERIFY printf("running test cases... 
"); fflush(stdout); for (test_case = 0; test_case < sizeof(test_cases)/sizeof(test_cases[0]); test_case++) { for (index = 0; index < total; index++) { Test item; item.value = test_cases[test_case](index, total); item.index = index; array1[index] = array2[index] = item; } WikiSort(array1, total, compare); MergeSort(array2, total, compare); WikiVerify(array1, Range_new(0, total), compare, "test case failed"); for (index = 0; index < total; index++) assert(!compare(array1[index], array2[index]) && !compare(array2[index], array1[index])); } printf("passed!\n"); #endif total_time = Seconds(); total_time1 = total_time2 = 0; for (total = 0; total < max_size; total += 2048 * 16) { double time1, time2; for (index = 0; index < total; index++) { Test item; /* TestingRandom, TestingRandomFew, TestingMostlyDescending, TestingMostlyAscending, */ /* TestingAscending, TestingDescending, TestingEqual, TestingJittered, TestingMostlyEqual, TestingAppend */ item.value = TestingRandom(index, total); #if VERIFY item.index = index; #endif array1[index] = array2[index] = item; } time1 = Seconds(); #if PROFILE comparisons = 0; #endif WikiSort(array1, total, compare); time1 = Seconds() - time1; total_time1 += time1; #if PROFILE compares1 = comparisons; total_compares1 += compares1; #endif time2 = Seconds(); #if PROFILE comparisons = 0; #endif MergeSort(array2, total, compare); time2 = Seconds() - time2; total_time2 += time2; #if PROFILE compares2 = comparisons; total_compares2 += compares2; #endif printf("[%zu]\n", total); if (time1 >= time2) printf("WikiSort: %f seconds, MergeSort: %f seconds (%f%% as fast)\n", time1, time2, time2/time1 * 100.0); else printf("WikiSort: %f seconds, MergeSort: %f seconds (%f%% faster)\n", time1, time2, time2/time1 * 100.0 - 100.0); #if PROFILE if (compares1 <= compares2) printf("WikiSort: %zu compares, MergeSort: %zu compares (%f%% as many)\n", compares1, compares2, compares1 * 100.0/compares2); else printf("WikiSort: %zu compares, MergeSort: %zu compares (%f%% more)\n", compares1, compares2, compares1 * 100.0/compares2 - 100.0); #endif #if VERIFY /* make sure the arrays are sorted correctly, and that the results were stable */ printf("verifying... "); fflush(stdout); WikiVerify(array1, Range_new(0, total), compare, "testing the final array"); for (index = 0; index < total; index++) assert(!compare(array1[index], array2[index]) && !compare(array2[index], array1[index])); printf("correct!\n"); #endif } total_time = Seconds() - total_time; printf("tests completed in %f seconds\n", total_time); if (total_time1 >= total_time2) printf("WikiSort: %f seconds, MergeSort: %f seconds (%f%% as fast)\n", total_time1, total_time2, total_time2/total_time1 * 100.0); else printf("WikiSort: %f seconds, MergeSort: %f seconds (%f%% faster)\n", total_time1, total_time2, total_time2/total_time1 * 100.0 - 100.0); #if PROFILE if (total_compares1 <= total_compares2) printf("WikiSort: %zu compares, MergeSort: %zu compares (%f%% as many)\n", total_compares1, total_compares2, total_compares1 * 100.0/total_compares2); else printf("WikiSort: %zu compares, MergeSort: %zu compares (%f%% more)\n", total_compares1, total_compares2, total_compares1 * 100.0/total_compares2 - 100.0); #endif free(array1); free(array2); return 0; }
Sources
- https://github.com/BonzaiThePenguin/WikiSort
- https://en.wikipedia.org/wiki/Block_sort