Sigh.
They won't let me post this anywhere else.
1. Introduction
The VB 100% Award is granted by Virus Bulletin, the well-known independent virus-testing body based in the UK. Using the WildList, the virus database maintained by the international WildList Organization, as its virus source, it tests anti-virus software from around the world and grants the award to products that achieve a 100% detection rate with a 0% false-positive rate. Every product that passes the VB test receives a VB 100% certification mark, which indicates that, running with its own default settings, the product detected 100% of a broad set of test virus samples and took no incorrect action against a selected set of clean files. VB holds six security-product tests per year, in the even-numbered months. Thanks to its financial independence and distinctive viewpoint, VB's testing has earned a professional reputation, and its neutral, fair, and professional positioning quickly made it an authority in the anti-virus field.
*Source: Baidu Baike
2. Importance
VB100 is a highly authoritative, scientific body for testing anti-virus software. Anyone who wants to become a power user who understands anti-virus products should learn to make use of VB100, and by reading VB100's test reports ordinary users can also pick out the anti-virus software that best suits their own PC.
3. Authority
1. It is a non-commercial organization: it charges no fees and is not influenced by any non-technical factors;
2. History and influence. Virus Bulletin was founded in 1989, giving it 23 years of history. The VB100 test has been widely recognized and accepted by the industry since the 1990s. Many internationally famous anti-virus vendors have been cut down in it, and because every participant's results are published openly, vendors who lack confidence generally stay away;
3. An extremely strict testing process. Virus Bulletin compares products on their virus-detection success rate, scanning speed, performance, and so on. There is no scoring: in Virus Bulletin's view there are only two possible outcomes, pass or fail. As they see it, the nature of anti-virus software leaves no middle ground for compromise: a virus is either caught or it is not, and performance is either good or it is not.
*Source: Baidu Baike
4. How to Use the Results
Take the most recent test results as an example.

5. RAP Test
In the results you can see a logo in which the figure 86.8% is the product's RAP test result (its detection rate against the newest samples). This test is quite strict: after a product is submitted, VB100 adds the new samples it collects each week to the test, covering four weeks of new samples in all, and the average score over those four weeks is the RAP test result.
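To make the arithmetic concrete, here is a minimal Python sketch of how a four-week RAP average is formed. The weekly detection rates are hypothetical values, chosen only so that they average to the 86.8% figure mentioned above; they are not Virus Bulletin's published numbers.

```python
# Minimal sketch of the four-week RAP average, with hypothetical
# weekly detection rates (detected new samples / total new samples,
# as percentages) -- not Virus Bulletin's actual published figures.
weekly_detection_rates = [88.1, 87.5, 86.0, 85.6]

# The RAP result is the plain average of the four weekly results.
rap_score = sum(weekly_detection_rates) / len(weekly_detection_rates)
print(f"RAP score: {rap_score:.1f}%")  # -> RAP score: 86.8%
```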

The following is Virus Bulletin's own explanation, in the original English, of how the RAP chart is compiled and how the tests are run; a short code sketch of these procedures follows the quoted text.
In the compilation of this chart, any RAP score achieved in a test in which the product in question generated one or more false positives is NOT counted towards that product's average score. Products that entered only one of the comparatives used to generate the chart (or for which only one set of results are counted due to false positives in other tests) are marked in RED - this indicates that the score may be considered less reliable an indicator of detection capability than those for whom an average of measures across several tests are available.
Procedures
The RAP tests are run according to the following procedures:
RAP samples are split into four sets. The set known as 'week +1' is gathered in the period from one to seven days after the product submission deadline. The 'week -1' set covers the deadline day itself and the six previous days. The 'week -2' set includes samples gathered eight to 14 days before the deadline, and the 'week -3' set consists of samples gathered 15 to 21 days before the deadline.
All samples are counted as dating from the point at which they are first seen by the Virus Bulletin test lab processing systems, or the date label of the batch with which they were received, whichever is earlier. Sample sources are not considered when compiling sets.
Samples are validated using our standard lab protocols, and classified to exclude certain inappropriate sample types. These include adware and other items considered 'potentially unwanted' by some products, partial samples requiring other components to operate, and original samples of true viruses received from external sources. Self-replicating viruses are replicated in-house and only new replications are considered for inclusion in the RAP sets.
Samples are rated by prevalence and significance as accurately as possible, using prevalence data from a wide range of sources. Sets are weighted to remove the least prevalent items. Scores are also weighted to minimise the impact of large quantities of similar items - for example, large batches of server-side morphed trojans and replicated true viruses are given a lower weighting than one-off unique items.
For each product entered for a review, we measure detection using our standard on-demand scanning procedure; this uses default product settings and ignores detections labelled as 'suspicious' only. Scores used in the per-test RAP quadrants are labelled 'Proactive' (the 'week +1' score) and 'Reactive' (the average of the scores for weeks -1, -2 and -3). Scores used in the four-test RAP averages quadrant are the averages of each score over the last four tests.
In the per-test quadrants, products with false positives in the test in question are marked by striking through the product identifier. For the four-test RAP averages quadrant, such scores are excluded when calculating averages.
Product identifiers on quadrant charts may be simplified or abbreviated to keep the chart readable.
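As a rough illustration of the procedures quoted above, the Python sketch below implements the week-set assignment, the prevalence-weighted scoring, the Proactive/Reactive split, and the false-positive exclusion rule. The data model (a Sample with first_seen, batch_date, detected and weight fields) is an assumption made for illustration only, not Virus Bulletin's actual tooling.

```python
# Sketch of the quoted RAP procedures under simplifying assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class Sample:
    first_seen: date     # first seen by the test lab's processing systems
    batch_date: date     # date label of the batch it was received with
    detected: bool       # found by the product's default on-demand scan
    weight: float = 1.0  # lowered for bulk morphed trojans / replications

def effective_date(s: Sample) -> date:
    # A sample dates from the earlier of first sighting and batch label.
    return min(s.first_seen, s.batch_date)

def week_set(s: Sample, deadline: date) -> str | None:
    """Assign a sample to one of the four RAP sets relative to the
    product submission deadline, following the quoted definitions."""
    d = (effective_date(s) - deadline).days
    if 1 <= d <= 7:
        return "week +1"   # one to seven days after the deadline
    if -6 <= d <= 0:
        return "week -1"   # deadline day plus the six previous days
    if -14 <= d <= -8:
        return "week -2"   # eight to 14 days before the deadline
    if -21 <= d <= -15:
        return "week -3"   # 15 to 21 days before the deadline
    return None            # outside the RAP window

def weighted_rate(samples: list[Sample]) -> float:
    # Prevalence-weighted detection rate, as a percentage.
    total = sum(s.weight for s in samples)
    found = sum(s.weight for s in samples if s.detected)
    return 100.0 * found / total if total else 0.0

def rap_scores(samples: list[Sample], deadline: date) -> dict[str, float]:
    sets: dict[str, list[Sample]] = {}
    for s in samples:
        label = week_set(s, deadline)
        if label:
            sets.setdefault(label, []).append(s)
    proactive = weighted_rate(sets.get("week +1", []))
    reactive = sum(weighted_rate(sets.get(w, []))
                   for w in ("week -1", "week -2", "week -3")) / 3
    return {"Proactive": proactive, "Reactive": reactive}

def four_test_average(scores: list[float],
                      had_false_positive: list[bool]) -> float | None:
    # Scores from tests in which the product produced false positives
    # are excluded before averaging, per the quoted rule.
    kept = [s for s, fp in zip(scores, had_false_positive) if not fp]
    return sum(kept) / len(kept) if kept else None
```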
Appendix:
Product and cumulative VB100 record since its first entry (a pass-rate sketch follows the list):
Eset Nod32 70 Pass, 3 Fail from February 1998 (4 tests not entered)
Sophos 60 Pass, 16 Fail from January 1998 (2 tests not entered)
Kaspersky 56 Pass, 20 Fail from January 1998 (1 test not entered)
Symantec 55 Pass, 7 Fail from January 1998 (13 tests not entered)
McAfee 46 Pass, 23 Fail from January 1998 (8 tests not entered)
avast! 44 Pass, 23 Fail from January 1998 (11 tests not entered)
AVG 39 Pass, 23 Fail from February 1998 (15 tests not entered)
F-Secure 33 Pass, 12 Fail from January 1998 (32 tests not entered)
Avira 30 Pass, 5 Fail from February 2005 (6 tests not entered)
BitDefender 29 Pass, 10 Fail from July 2000 (24 tests not entered)
G Data 28 Pass, 12 Fail from November 2000 (21 tests not entered)
Kingsoft 12 Pass, 8 Fail from October 2006 (9 tests not entered)
PC Tools 7 Pass, 4 Fail from June 2007 (15 tests not entered)
Rising 6 Pass, 7 Fail from December 2007 (11 tests not entered)
Qihoo 6 Pass, 1 Fail from December 2009 (4 tests not entered)
MSE 4 Pass, 1 Fail from December 2009 (6 tests not entered)
*Source: "Sina Share" (新浪共享), user www888, with modifications
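To compare these records on a single scale, a reader can compute each product's pass rate over the tests it actually entered. A small sketch, with only a few entries transcribed from the list for brevity:

```python
# Pass rate = passes / (passes + fails), over tests actually entered.
# Entries transcribed from the list above (subset shown for brevity).
records = {
    "Eset Nod32": (70, 3),
    "Sophos": (60, 16),
    "Kaspersky": (56, 20),
    "Qihoo": (6, 1),
    "MSE": (4, 1),
}

for name, (passed, failed) in sorted(
        records.items(),
        key=lambda kv: kv[1][0] / (kv[1][0] + kv[1][1]),
        reverse=True):
    rate = 100.0 * passed / (passed + failed)
    print(f"{name:12s} {rate:5.1f}% pass rate over {passed + failed} tests entered")
```

Note that a higher pass rate over few entries (for example Qihoo's 6 of 7) is a weaker signal than a similar rate sustained over several dozen tests.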
Afterword
I compiled and edited this article from Baidu Baike, the Kafan forum, Sina resources, and several other sites.
My thanks to everyone, and every site, that helped me.
Apologies for the errors that no doubt remain.
幻之瞳澈, May 18, 2012
Original post: tieba.baidu.com/p/1600337561