False positive rules clarified, Dr. Web results recalculated.
VB plans a review of the test procedures for the VB100 comparative testing and certification program, after issues arising from the recent Linux comparative (see the April issue of VB) brought to light a lack of clarity in the publicly available procedures document.
The test methodology requires that, to qualify for a VB100 award, a product must not raise any false positives while scanning our set of known-clean files. However, files flagged as 'suspicious' are allowed and do not count as false positives. In the recent Linux comparative, ESET's Nod32 product flagged one clean file as 'probably unknown TSR.COM.EXE virus', and was thus adjudged ineligible for the award. ESET queried the decision, arguing that such a detection could be considered a suspicious flag rather than a full false positive.
VB has stood by its decision to deny the award in this case, both because the wording used to flag the detection was felt to be too strong to be considered merely 'suspicious', and because the same flag has been, and continues to be, counted as a detection when scanning infected testsets, with files flagged in this way included in the final 'infected files' count. Clearly, it would be improper to allow the same marker to count as a detection but not as a false positive.
'Nod32 has an outstanding record in the VB100', said John Hawes, Technical Consultant at Virus Bulletin, in charge of VB100 testing. 'The product has proven successful in more tests than any other, and has never missed an In-The-Wild virus since our tests began in 1998. It also consistently shines in our speed testing, and in my experience is one of the most flexible and usable products on the market. I'm sure this will be a minor blip in their VB100 record, and that ESET will continue to produce excellent results in future tests.'
The false positive flag is thought to have been caused by an erroneous increase in the heuristics level in the version of the product submitted for testing.
In a separate issue, closer analysis of the most recent set of results has revealed some errors in the detection figures shown for Doctor Web's Dr. Web product. These errors were due to differences in the way infections in certain file types were reported in the Dr. Web logs, which caused our automated analysis tools to fail to record the detections. Further investigation and retesting has confirmed that the version of Dr. Web submitted for testing did in fact prove capable of detecting all samples in our 'Macro', 'File Infector', 'Linux' and 'Worms and Bots' testsets, scoring 100% in all of these categories.
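As a purely illustrative sketch of how such errors can arise (the actual Dr. Web log format and VB's analysis tools are not reproduced here, and all log lines below are hypothetical), a log parser keyed to one phrasing of a detection report will silently miss detections that are reported in a different form:

```python
import re

# Hypothetical log lines; wording for archive/container file types
# differs from the wording for ordinary files.
log_lines = [
    "/samples/macro/doc001.doc infected with W97M.Sample",
    "/samples/archive/pack01.zip archive contains infected objects",
]

# A parser matching only one phrasing records just the first detection.
naive = re.compile(r"infected with \S+")
print(sum(1 for line in log_lines if naive.search(line)))   # 1 -- one miss

# Matching the broader family of phrasings recovers both detections.
robust = re.compile(r"infected with \S+|contains infected")
print(sum(1 for line in log_lines if robust.search(line)))  # 2
```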
A small number of samples in the 'Polymorphic' set were correctly recorded as misses, however, and the failure to detect three samples from the core WildList set has also been confirmed, so the product remains ineligible for the VB100 award. Appropriate adjustments will be made to our online results pages as soon as possible, and VB extends its apologies to Doctor Web for these errors.
18 April 2007