I’m exploring a bunch of different possibilities at work, looking for an optimal set of parameters for an image processing problem. For this algorithm I’m looking at, there are three independent tunable parameters, so for each combination of parameters I’m generating a sample output. This gives me a data cube, with each of the three axes roughly 20 elements in length to explore the space adequately.
Now, at each point in this data cube, the output is an image. With about 10 megapixels.
And each pixel of the image contains not a single value (as in a greyscale image), not three values (a full colour image), but six different data values: an actual greyscale “image”, plus a 2D alignment vector at each point, plus three separate confidence measures for those.
And that’s for just one of the competing algorithms, of which I have six to try. And every time we stop to think about it a bit, we come up with new algorithms, or other possible parameter tweaks that might improve the older ones.
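To get a sense of the scale being described, here is a back-of-envelope sizing sketch using the figures from the post. The float32 storage assumption is mine; the grid size, pixel count, and value count come from the description above.

```python
# Rough size of the full parameter sweep described above.
# Assumption: each value is stored as a 4-byte float32.
grid_points = 20 ** 3            # three tunable parameters, ~20 values each
pixels_per_image = 10_000_000    # ~10 megapixels per output image
values_per_pixel = 6             # greyscale + 2D vector + 3 confidence measures
bytes_per_value = 4              # float32 (assumed)

per_algorithm = grid_points * pixels_per_image * values_per_pixel * bytes_per_value
print(f"{per_algorithm / 1e12:.2f} TB per algorithm")   # 1.92 TB
print(f"{6 * per_algorithm / 1e12:.2f} TB for all six") # 11.52 TB
```

So even before new algorithms get added, an exhaustive sweep is in terabyte territory, which is why limiting the test cases matters.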
This is why you get paid the big bucks! ;)
Are you using a test-reducing experiment design?
Not sure what you mean exactly – if that’s a standard phrase. But I’m definitely doing things to intelligently limit the number of test cases we need to run.