Interested in a desktop application instead?
See the Lacinato ABX/Shootouter.
Free -- please consider a donation!
Licenses last until buy-in version + 1
e.g. buy in at 1.3, upgrade through 2.3.
At this price, I cannot offer direct support, but please report bugs.
If you live in a country with a low exchange rate to the US dollar, feel free to lower the fee as appropriate.
You can configure the player any way you like: ABX tests, shootouts, a combination of both, or just a simple player. More advanced features are available too: you can specify the files to be compared directly in the HTML, or in an XML file. You can "tag" the files in the XML, and filtering menus are generated automatically so users can select different groups of files to build their own comparisons. You can also associate preview images with those tags, which are displayed in associated <div> tags.
The entire player can be styled in CSS, with ids provided for all elements. Multiple players can coexist on the same page.
- ABX testing
- ABCD..X testing (more than two files)
- Optional complex filtering of tagged files
- Jump from one file to the next while maintaining play position
- Per-file gain adjustment
- Configurable play range / looping
- Supports whatever file types the browser supports
- Calculates the p-value (reported as the confidence that your results are better than chance)
Contact me if you have questions/corrections/suggestions.
Important notes on ABX testing:
- Pick two or more files that differ only in the quality you want to test yourself on (levels excepted -- those can be adjusted).
- Click "Start AB*X" or "Start Shootout" to start the tests, and click "choose" to make your choice.
- You can use the per-file "play" buttons while ABX testing, but for a stricter test that could be considered cheating. (You must use them for a Shootout, of course.)
- After 15 or so tests, click "Show/Hide Results" to see how you did.
- For ABX testing, the "Confidence" value is the percentage chance that your results are better than chance. E.g., if you scored 95% accuracy but the confidence value is only 60%, the accuracy result is not statistically significant. A common standard is to require 95% or better confidence before considering a result meaningful. For example, choosing 12 out of 16 correctly gives an accuracy of 75% with a confidence of 96% -- a much more meaningful result, strongly implying that you can identify the files 75% of the time and that the result is not due to chance.
- Some recommend not to do more than 25 or so tests for a given pair of files, because listening fatigue will start to bias your results.
- Repeating sets of tests on the same pair of files until you get the result you are looking for renders your results meaningless. After all, even a 95%-confidence result is wrong 5% of the time, so if you run 20 sets of trials guessing completely at random, one of them will likely produce that impressive-seeming result. You should be able to repeat a result reliably for it to count: pulling up a couple of files and getting 12 out of 15 doesn't mean much unless you can do it again. Try running multiple tests over several days.
- On that theme: the internet is a big place, so if you post two totally identical files and 300 people download them and run an ABX trial of 20 comparisons each, about six of those people will likely get 15 or more correct -- a 95%+ confidence result -- purely by chance, and will proudly post their results. This obviously does not mean that a small group of people have golden ears that can hear the difference between two digitally identical files, so beware the impressions you get from reading forum posts.
- Lacinato WebABX uses an exact binomial distribution algorithm to calculate the p-value.
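The exact binomial calculation mentioned above is easy to reproduce. Here is a minimal sketch in Python (the function names are mine for illustration, not part of WebABX): under the null hypothesis of random guessing, each comparison is a fair coin flip, and the one-sided p-value is the probability of scoring at least as well as you did.

```python
from math import comb

def p_value(correct: int, trials: int) -> float:
    """One-sided exact binomial p-value: the probability of getting
    at least `correct` out of `trials` right by pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

def confidence(correct: int, trials: int) -> float:
    """Confidence that the result is better than chance (1 - p-value)."""
    return 1.0 - p_value(correct, trials)

print(f"12/16: {confidence(12, 16):.0%} confidence")  # the 96% example above
print(f"15/20: p = {p_value(15, 20):.3f}")            # roughly 2% by chance
```

Multiplying the p-value by the number of independent testers gives the expected number of chance "successes" -- the arithmetic behind the forum-post caveat above.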