At one point in the bad old days of CSS hackery, I created a "mastergrid" showing which browser applied particular CSS hacks. Collecting this data was tedious enough to get me thinking about automating the process, and later this snowballed into an effort to design an automated browser-testing tool.
Comprehensive data on browser bugs and support is honestly difficult to compile. Although various benchmarks and test suites are available, they are by no means comprehensive, and they tend to generate data that is at best difficult to find and use. As the browser wars heat up once again, the number of browser versions in use in the wild is growing as well. As a result I am hoping to continue my work on this project.
A quick example of the kind of results this produces can be seen here, and access to the web application is available here (currently broken). The old, outdated mastergrid is also still available.
Defining what constitutes a pass is not always trivial, especially when a test has a number of corner cases, crashes the browser, or even demonstrates an exploit. Add to this the task of reducing the server's role as much as possible while still serving particular test cases in particular encodings, or with varying types of additional server-based content, and it becomes clear that automating test cases is rarely straightforward, especially in a cross-browser manner.
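One way to make "pass" machine-checkable is to have each test case declare the value a well-behaved browser should report, and to treat crashes and timeouts as distinct outcomes rather than ordinary failures. A minimal sketch of that idea (the names and structure here are hypothetical, not the tool's actual design):

```ruby
# Hypothetical sketch: each test case declares an expected value;
# crashes and timeouts are separate outcomes, not plain failures.
TestCase = Struct.new(:id, :expected)

def classify(test_case, reported)
  case reported
  when :crash   then :crash    # the browser died while running the test
  when :timeout then :timeout  # no result ever came back
  when test_case.expected then :pass
  else :fail
  end
end

overflow = TestCase.new("css-overflow-001", "hidden")
classify(overflow, "hidden")   # a matching report is a pass
classify(overflow, "visible")  # anything else is a fail
classify(overflow, :crash)     # except the special outcomes above
```

Keeping the special outcomes separate matters later, when results are compared: a crash is far more interesting than a simple rendering failure.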
There are a number of administrative concerns such as specifying which "mode" a browser was in, permitting "touch-ups" or "re-runs", and dealing with differences in browser UI (for mobile or even non-screen browsers). Add to this the need to categorize test cases and minimize the amount of data being transmitted, as there could be tens of thousands of tests to run. And of course, none of these considerations are useful without results that are easy to compare, including at-a-glance visualization of special cases.
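The comparison problem above is, at heart, a grid: rows are test cases, columns are browser/version pairs, cells are outcomes. A rough sketch of how the "special cases" worth eyeballing could be flagged (the data here is invented for illustration, not real results):

```ruby
# Hypothetical sketch of the at-a-glance grid: a test whose cells
# disagree across browsers is flagged as a special case.
results = {
  "css-float-001"    => { "Firefox 3" => :pass, "IE 7" => :fail, "Opera 9" => :pass },
  "css-overflow-002" => { "Firefox 3" => :pass, "IE 7" => :pass, "Opera 9" => :pass },
}

def special_cases(results)
  # Keep only tests where the browsers did not all agree.
  results.select { |_test, cells| cells.values.uniq.size > 1 }.keys
end

special_cases(results)  # => ["css-float-001"]
```

With tens of thousands of tests, rows where every browser agrees can be collapsed entirely, leaving only the disagreements to visualize.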
Moreover, without the time to create the actual test cases themselves, the project is moving along only slowly. My eventual aim is to create a test-case submission form that allows users to submit tests for any technology, from CSS to SVG or even plugins, but this is perhaps the greatest remaining obstacle to making this a truly "comprehensive" test suite, one that also includes the various tests already developed by standards bodies.
I opted to use Ruby on Rails, simply because this is precisely the type of project that Rails is well-suited for. Simple database migrations could be used to add, remove or revise test-cases, and the analysis and visualization of all of the data could be easily handled by a next-gen scripting language.
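As a sketch of what that looks like, adding a batch of test cases could be just another migration. The schema below is purely illustrative (the table and column names are my invention here, not the app's actual schema):

```ruby
# Hypothetical migration sketch: test cases live in an ordinary table,
# so adding, removing, or revising them is a normal Rails migration.
class CreateTestCases < ActiveRecord::Migration
  def self.up
    create_table :test_cases do |t|
      t.string :category, :null => false   # e.g. "css", "svg", "plugin"
      t.string :title,    :null => false
      t.text   :markup                     # the test document itself
      t.string :expected                   # value a passing browser reports
      t.timestamps
    end
    add_index :test_cases, :category
  end

  def self.down
    drop_table :test_cases
  end
end
```

The appeal is that revisions to the suite are versioned and reversible for free, the same as any other schema change.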
I have also collected as many versions of browsers as I legally can, including the use of virtual machines and emulators to capture data from mobile browsers.