Why is the test count too high when running tests?
If you've seen the situation where the number of completed tests exceeds the number of expected tests, you've probably wondered why the counts don't match:
The expected test count is based on the number of test methods in the assembly: we count 1x for every [Fact] or [Theory] method. In the case of theories, that single test method may result in multiple tests if the data source(s) for that theory end up providing multiple rows of data, which causes the final run count to be greater than the test method count.
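For example, consider the following hypothetical test class (the names are illustrative). A runner would predict two tests, because there are two test methods, but four results come back at run time, because the theory expands into three tests:

```csharp
using Xunit;

public class CountingExampleTests
{
    // Counted as 1 expected test; produces 1 result.
    [Fact]
    public void Addition_works()
    {
        Assert.Equal(4, 2 + 2);
    }

    // Also counted as 1 expected test, but each [InlineData] row
    // becomes its own test at run time, so this produces 3 results.
    [Theory]
    [InlineData(1)]
    [InlineData(2)]
    [InlineData(3)]
    public void Value_is_positive(int value)
    {
        Assert.True(value > 0);
    }
}
```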
So why don't we enumerate all the test data ahead of time so we can get an accurate count? The way the test API works in xUnit.net, we only have run granularity down to the test method, not the individual test. When we ask to run a test method, it may return multiple results from that single method, so our enumeration and running APIs operate at the test method level. Additionally, we have no way of knowing whether enumerating the test data will be expensive (for example, because it's stored in a database) or even consistent (a data provider might return not only random data, but a random quantity of data). We felt that counting tests needed to be a relatively fast operation: the count is used purely for cosmetic display purposes in the console and MSBuild runners, and a slow enumeration would cause the UI to become unresponsive in GUI runners.
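To see why up-front enumeration can't be trusted, consider a hypothetical [MemberData] source that yields a random quantity of rows. Enumerating it once to predict a count and again to actually run the tests could produce two different totals:

```csharp
using System;
using System.Collections.Generic;
using Xunit;

public class UnstableDataTests
{
    // Hypothetical data source: yields between 1 and 5 rows,
    // with the quantity chosen at random on every enumeration.
    public static IEnumerable<object[]> RandomRows()
    {
        var rng = new Random();
        int count = rng.Next(1, 6);
        for (int i = 0; i < count; i++)
            yield return new object[] { i };
    }

    // Enumerating RandomRows() ahead of time to predict a test
    // count would not necessarily match the number of rows
    // produced during the actual run.
    [Theory]
    [MemberData(nameof(RandomRows))]
    public void Index_is_non_negative(int index)
    {
        Assert.True(index >= 0);
    }
}
```

The same mismatch applies to any data source with side effects or external state, such as a database query whose result set changes between the counting pass and the running pass.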