One approach I *did* consider early on was to define a kind of 'Pseudo Hamming Distance': i.e. define PHD such that any two single combination wagers differing in exactly one leg's entrant number have PHD = 1
e.g. with A = 1/4/7, B = 2/4/7 and C = 2/5/7, A and B have PHD = 1, as do B and C;
A and C have PHD = 2, but PHD > 1 is not of interest here.
Construct an adjacency list over all pairs with PHD = 1 -> an undirected graph whose nodes are combinations and whose edges link combinations with PHD = 1 -> one starts to see various interesting hypercube-type structures in the graphviz plots which are analogous to grouped wagers. It is easy enough to see this by working backwards from (say)
1+2+3/4+5+6/7+8+9/10+11+12, exploding it by CP, constructing the PHD = 1 adjacency list, turning this into a DOT file and firing up graphviz.
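For concreteness, here is a minimal sketch of that pipeline - the function names are mine, and the grouped-wager string format is the '/'-separated-legs, '+'-separated-entrants notation used above:

import itertools

def explode(grouped_wager):
    """Explode a grouped wager like '1+2+3/4+5+6' into single combinations by CP."""
    legs = [leg.split('+') for leg in grouped_wager.split('/')]
    return [combo for combo in itertools.product(*legs)]

def phd(a, b):
    """Pseudo Hamming Distance: number of legs in which two combinations differ."""
    return sum(1 for x, y in zip(a, b) if x != y)

def write_dot(combos, path='phd1.dot'):
    """Emit a DOT file whose edges join combinations with PHD = 1."""
    with open(path, 'w') as f:
        f.write('graph PHD1 {\n')
        for a, b in itertools.combinations(combos, 2):
            if phd(a, b) == 1:
                f.write('  "%s" -- "%s";\n' % ('/'.join(a), '/'.join(b)))
        f.write('}\n')

# e.g. write_dot(explode('1+2+3/4+5+6/7+8+9/10+11+12'))
# then: dot -Tpng phd1.dot -o phd1.png   (or neato/fdp for a less layered layout)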
These graphviz pictures look very lovely to the human eye. Unfortunately, computational extraction of such features from graph spaghetti land - let alone extraction of *maximal* features - seems infeasible before the arrival of Quantum Computers, Maxwell's Demons in every car engine, and Unicorns in the back garden.
I also considered a set cover approach, but got a little dispirited when I sat down to compute the number of potential covering sets - the CP of the power sets of the race fields!
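To get a rough feel for the scale (the field sizes here are purely illustrative): with seven legs of twelve runners each, every leg has 2**12 - 1 = 4095 non-empty entrant groupings, so the pool of candidate grouped wagers from which a cover would be drawn is already 4095**7, somewhere around 10**25:

# purely illustrative field sizes: seven legs of twelve runners each
field_sizes = [12] * 7
candidates = 1
for n in field_sizes:
    candidates *= 2 ** n - 1    # non-empty subsets of one leg's field
print(candidates)               # 4095**7, roughly 1.9e25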
(1) We have estimated win probabilities for the entrants in each of the race legs. Assume that I have a black box which produces these estimated probabilities.
(2) We generate the CP of the entrant numbers in the race fields: Leg 1 Entrants x Leg 2 Entrants x ..
For each combination generated by the CP, we apply some formula to the probabilities associated with the combination's entrant numbers to arrive at a decision about whether or not that particular combination represents a viable wager (see the sketch below).
(3) If a combination does represent a viable wager, we then need to ensure that we have sized its wager amount in a correct and optimal manner.
...
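Here is a minimal sketch of steps 2 and 3 - the product-of-probabilities rule, the viability threshold and the sizing rule are all placeholders of my own, standing in for whatever the real black box and staking logic would do:

import itertools

def viable_combinations(leg_probs, min_prob=0.001):
    """leg_probs: one dict per leg mapping entrant number -> estimated win
    probability (step 1's black box output). Yields (combination, probability)
    pairs for each CP combination whose naive combined probability clears a
    threshold - both the product rule and the threshold are placeholders."""
    for combo in itertools.product(*(sorted(p) for p in leg_probs)):
        prob = 1.0
        for leg, entrant in enumerate(combo):
            prob *= leg_probs[leg][entrant]
        if prob >= min_prob:
            yield combo, prob

def size_wager(prob, dividend, bank, fraction=0.05):
    """Placeholder sizing rule for step 3: stake in proportion to the estimated
    edge, capped at a small fraction of the bank. A real rule (Kelly-style or
    otherwise) would go here."""
    edge = prob * dividend - 1.0          # expected profit per unit staked
    return bank * min(fraction, max(0.0, edge) * fraction)

# e.g. with leg_probs = [{1: 0.30, 2: 0.25, 3: 0.45}, {4: 0.60, 5: 0.40}] and
# min_prob=0.2, only the combination (3, 4) survives the filter.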
Unfortunately these single combinations are hard to compress well - and that is ignoring the issue of what to do about differing wager sizes when we can otherwise match two combinations for merging together.
So one possible hack is to insert a step 1(a) where we k-means cluster the win probabilities in each leg, such that we end up with multiple entrant numbers per cluster mean (which is itself a probability). We then proceed through steps 2 and 3, but instead of generating single combinations, we are now directly producing grouped wagers whose legs are the clusters - e.g. a leg of 1+6+12 where k-means has clustered entrants 1, 6 and 12 together with a cluster mean/probability of (say) 0.1, and so on.
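A minimal sketch of such a step 1(a), using a hand-rolled 1-D Lloyd's algorithm rather than any particular library (the function name and the choice of k per leg are mine):

import random

def cluster_leg(probs, k, iterations=100):
    """probs: dict mapping entrant number -> estimated win probability for one leg.
    Returns a list of (cluster_mean, [entrant numbers]) pairs - i.e. the grouped
    'pseudo-entrants' that steps 2 and 3 would then operate on."""
    centres = random.sample(list(probs.values()), k)
    for _ in range(iterations):
        buckets = [[] for _ in range(k)]
        for entrant, p in probs.items():
            nearest = min(range(k), key=lambda i: abs(p - centres[i]))
            buckets[nearest].append(entrant)
        new_centres = [sum(probs[e] for e in b) / len(b) if b else centres[i]
                       for i, b in enumerate(buckets)]
        if new_centres == centres:
            break
        centres = new_centres
    return [(centres[i], sorted(buckets[i])) for i in range(k) if buckets[i]]

# e.g. cluster_leg({1: 0.11, 6: 0.09, 12: 0.10, 2: 0.28, 9: 0.31, 4: 0.11}, k=3)
# would typically put entrants 2 and 9 together (mean ~0.3) and split the four
# low-probability entrants across the other clusters, depending on initialisation.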
This is a bit of a hack, as clearly it will result in the wagers not covering exactly the same single combinations as if we had generated pure single wagers without k-means clustering. Hence my interest in doing set comparisons!
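Those set comparisons are at least straightforward once everything is exploded back down to single combinations - a sketch, assuming wagers are represented as one list of entrant numbers per leg, as in the merge code below:

import itertools

def explode_wagers(wagers):
    """Explode grouped wagers (one list of entrant numbers per leg) into the set
    of single combinations they cover."""
    singles = set()
    for wager in wagers:
        singles.update(itertools.product(*wager))
    return singles

def compare_coverage(pure_singles, kmeans_wagers):
    """Return (missed, spurious): combinations the pure single-wager approach
    wanted but the k-means wagers do not cover, and vice versa."""
    wanted = explode_wagers(pure_singles)
    covered = explode_wagers(kmeans_wagers)
    return wanted - covered, covered - wanted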
Another brute force greedy approach is to start with single combinations and do something like this:
def merge_combinations(combinations):
    """Greedily merge all combinations on one field, then next field, etc.
    Parameter combinations is structured so:
    [[[1], [2], [3], [4], [5], [6], [7]],
     [[1], [2], [3], [4], [5], [6], [8]]]
    The input combinations are modified *in place*. After calling this
    method, combinations would become:
    [[[1], [2], [3], [4], [5], [6], [7,8]]]
    """
    def rotate_left(combi):
        combi.append(combi.pop(0))
    # Assumes exactly 7 fields per combination: merge on the last field,
    # then rotate so that each field takes a turn in the last position.
    for do_it_seven_times in xrange(7):
        combinations.sort()
        k = 1
        while k < len(combinations):
            if combinations[k-1][:6] == combinations[k][:6]:
                # First six fields identical: fold the last field of one
                # combination into its neighbour and drop it.
                combinations[k-1][6].extend(combinations.pop(k)[6])
                combinations[k-1][6].sort()
            else:
                k += 1
        map(rotate_left, combinations)
def merge_demo():
    from pprint import pprint
    combis = [[[1], [1], [1], [2], [2], [12], [1]],
              [[1], [1], [1], [2], [6], [3], [1]],
              [[1], [1], [1], [2], [6], [6], [1]],
              [[1], [1], [1], [2], [6], [7], [1]],
              [[1], [1], [1], [2], [6], [7], [2]],
              [[1], [1], [1], [2], [6], [12], [1]],
              [[1], [1], [1], [2], [8], [7], [1]],
              [[1], [1], [1], [2], [8], [12], [1]],
              [[1], [1], [1], [2], [9], [3], [1]],
              [[1], [1], [1], [2], [9], [7], [1]],
              [[1], [1], [1], [2], [9], [12], [1]],
              [[1], [1], [1], [3], [2], [7], [1]],
              [[1], [1], [1], [3], [2], [12], [1]]]
    pprint(sorted(combis))
    # 13
    print len(combis)
    merge_combinations(combis)
    pprint(sorted(combis))
    # ->
    # [[[1], [1], [1], [2], [6], [6], [1]],
    #  [[1], [1], [1], [2], [6], [7], [1, 2]],
    #  [[1], [1], [1], [2], [6, 8, 9], [12], [1]],
    #  [[1], [1], [1], [2], [6, 9], [3], [1]],
    #  [[1], [1], [1], [2], [8, 9], [7], [1]],
    #  [[1], [1], [1], [2, 3], [2], [12], [1]],
    #  [[1], [1], [1], [3], [2], [7], [1]]]
    # 7
    print len(combis)
    print
    print
    combis = [[[13], [6, 10, 13], [3, 6], [4], [2, 8], [7], [1]],
              [[13], [6, 10, 13], [3, 6], [4], [9], [7], [1]],
              [[13], [6, 10, 13], [3, 6], [4], [6], [12], [1]],
              [[13], [6, 10, 13], [3, 6], [4], [6], [7], [1]],
              [[13], [6, 10, 13], [5, 10], [10], [2, 8], [7], [1]],
              [[13], [6, 10, 13], [5, 10], [10], [9], [7], [1]],
              [[2, 9], [11], [7, 13], [6, 12], [9], [12], [3]],
              [[2, 9], [11], [7, 13], [6, 12], [9], [12], [2]],
              [[2, 9], [11], [7, 13], [6, 12], [9], [12], [1]],
              [[2, 9], [11], [7, 13], [6, 12], [9], [7], [5, 8, 14]],
              [[2, 9], [11], [7, 13], [6, 12], [9], [7], [7]],
              [[2, 9], [11], [7, 13], [6, 12], [9], [7], [4]]]
    pprint(combis)
    # 12
    print len(combis)
    merge_combinations(combis)
    # ->
    # [[[13], [6, 10, 13], [3, 6], [4], [2, 6, 8, 9], [7], [1]],
    #  [[13], [6, 10, 13], [3, 6], [4], [6], [12], [1]],
    #  [[13], [6, 10, 13], [5, 10], [10], [2, 8, 9], [7], [1]],
    #  [[2, 9], [11], [7, 13], [6, 12], [9], [12], [1, 2, 3]],
    #  [[2, 9], [11], [7, 13], [6, 12], [9], [7], [4, 5, 7, 8, 14]]]
    pprint(combis)
    # 5
    print len(combis)
I wrote a version of this in PLT Scheme a while back as well. IIRC, I had to flip everything around and perform actions at the head of the lists instead of at the tail as with Python lists (which are really more like arrays). Will have to try my hand at a Haskell version later and see how it goes - although I have the suspicion that feeding such a routine 1M single combinations is going to run into all sorts of space issues.
Unfortunately this approach never seems to result in more than about a 10x compression factor. The k-means approach can do much better - at the cost of missing some theoretically desirable combinations and betting on some spurious ones.
As always, would be very interested to hear if you have any ideas/insights on first seeing this kind of problem. I'm almost certainly too fixed in my thinking due to long association with the wagering problem domain and probably cannot see the wood for the trees!