Views: 10314 | Replies: 79

[Original Technical Post] VB100 April 2011 Test: Translated Report Sections on Chinese Antivirus Products

猪头无双
Posted on 2011-6-26 15:43:18
Last edited by 猪头无双 on 2011-6-26 16:57




Setting up Windows XP has become such a familiar and oft-repeated task that it requires very little effort these days. In fact, we simply recycled bare machine images from the last run on the platform a year ago, tweaking and adjusting them a little to make them more at home on our current hardware and network set-up, and re-recording the snapshots ready to start testing. As usual, no updates beyond the latest service pack were included, and additional software was kept to a minimum, with only some network drivers and a few basic tools such as archivers, document viewers and so on added to the basic operating system. With the test machines ready good and early, test sets were compiled as early as possible too. The WildList set was synchronized with the January 2011 issue of the WildList, released a few days before the test set deadline of 16 February. This meant a few new additions to the core certification set, the bulk of which were simple autorun worms and the like. Most interesting to us were a pair of new W32/Virut strains, which promised to tax the products, and as usual our automated replication system churned out several thousand confirmed working samples to add into the mix. The deadline for product submission was 23 February, and as usual our RAP sets were built around that date, with three sets compiled from samples first seen in each of the three weeks before that date, and a fourth set from samples seen in the week that followed. We also put together entirely new sets of trojans, worms and bots, all gathered in the period between the closing of the test sets for the last comparative and the start of this month’s RAP period. In total, after verification and classification to exclude less prevalent items, we included around 40,000 samples in the trojans set, 20,000 in the set of worms and bots, and a weekly average of 20,000 in the RAP sets.
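The RAP set construction described above — three "reactive" sets from the weeks before the submission deadline and one "proactive" set from the week after — amounts to bucketing samples by first-seen date. A minimal sketch (the `rap_week` helper and its bucket labels are our own illustration, not VB's actual tooling):

```python
from datetime import date

def rap_week(first_seen, deadline):
    """Bucket a sample into a RAP week relative to the product-submission
    deadline. Weeks -3..-1 are the 'reactive' sets (samples first seen
    before the deadline); week +1 is the 'proactive' set (the week after).
    Returns None for samples outside the four-week RAP window."""
    delta = (first_seen - deadline).days
    if -21 <= delta < -14:
        return "week -3"
    if -14 <= delta < -7:
        return "week -2"
    if -7 <= delta < 0:
        return "week -1"
    if 0 <= delta < 7:
        return "week +1 (proactive)"
    return None

# Example: a sample first seen on 10 February 2011, against the
# 23 February deadline, lands in week -2.
bucket = rap_week(date(2011, 2, 10), date(2011, 2, 23))
```

Samples falling outside the window return `None` and would simply be excluded from the RAP sets.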
The clean set saw a fairly substantial expansion, focusing on the sort of software most commonly used on home desktops. Music and video players, games and entertainment utilities dominated the extra 100,000 or so files added this month, while the retirement of some older and less relevant items from the set kept it at just under half a million unique files, weighing in at a hefty 125GB. Some plans to revamp our speed sets were put on hold and those sets were left pretty much unchanged from the last few tests. However, a new performance test was put together, using samples once again selected for their appropriateness to the average home desktop situation. This new test was designed to reproduce a simple set of standard file operations, and by measuring how long they took to perform and what resources were used, to reflect the impact of security solutions on everyday activities. We selected at random several hundred music, video and still picture files, of various types and sizes, and placed them on a dedicated web server that was visible to the test machines. During the test, these files were downloaded, both individually and as simple zip archives, moved from one place to another, copied back again, extracted from archives and compressed into archives, then deleted. The time taken to complete these activities, as well as the amount of RAM and CPU time used during them, was measured and compared with baselines taken on unprotected systems. As with all our performance tests, each measure was taken several times and averaged, and care was taken to avoid compromising the data – for example, the download stage was run on only one test machine at a time to avoid possible network latency issues. We hope to expand on this selection of activities in future tests, possibly refining the selection of samples to reflect the platforms used in each comparative, and perhaps also recording the data with greater granularity.
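The file-operations part of that performance test — copy, archive, extract, delete, with each measure repeated and averaged — can be sketched roughly as follows (hypothetical code: it uses local files rather than the report's HTTP download stage, and omits the RAM/CPU sampling; the function names are ours):

```python
import os
import shutil
import statistics
import time
import zipfile

def timed(fn, runs=3):
    """Run fn several times and return the mean wall-clock seconds,
    mirroring the report's practice of repeating and averaging measures."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples)

def file_ops_once(src_dir, work_dir):
    """One pass of an 'everyday activities' suite: copy the sample files,
    compress them into a zip, extract the zip, then clean everything up."""
    copied = os.path.join(work_dir, "copied")
    shutil.copytree(src_dir, copied)
    archive = shutil.make_archive(os.path.join(work_dir, "bundle"), "zip", copied)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(os.path.join(work_dir, "extracted"))
    shutil.rmtree(copied)
    shutil.rmtree(os.path.join(work_dir, "extracted"))
    os.remove(archive)

def file_ops_benchmark(src_dir, work_dir, runs=3):
    """Average elapsed time for the suite; comparing this figure against
    the same run on an unprotected machine estimates a product's overhead."""
    return timed(lambda: file_ops_once(src_dir, work_dir), runs)
```

The interesting number is not the absolute time but the delta against the unprotected baseline, which is how the report expresses a product's impact.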
We had also hoped to run some trials of another new line of tests, looking at how well products handle the very latest threats and breaking somewhat with VB100 tradition by allowing both online updating and access to online resources such as real-time ‘cloud’ lookup systems. However, when the deadline day arrived and we were swamped with entrants, it was clear that we would not have the time to dedicate to this new set of tests, so they were put on hold until next time. The final tally came in at 69 products – breaking all previous records once again. Several of these were entirely new names (indeed, a couple were unknown to the lab team until the deadline day itself). Meanwhile, all the regulars seemed to be present and correct, including a couple of big names that had been missing from the last few tests. With such a monster task ahead of us, there was not much we could do but get cracking, as usual crossing all available digits and praying to all available deities for as little grief as possible.



Translation:

For us, setting up XP is a familiar, oft-repeated job with little difficulty to it. In fact, we used the images left over from last year’s test on the same platform, tweaked them slightly to better match our current hardware and network set-up, and backed up the snapshots. As usual, we applied no updates beyond the SP3 service pack, and kept additional programs to a minimum – just some document viewers, network drivers and the like. With the systems loaded, test preparation got under way. The WildList collection was synchronized with the January 2011 issue, released a few days before the test-set deadline of 16 February. That meant a few new faces among the samples: a batch of simple autorun worms, plus a pair of new W32/Virut strains that caught our interest (our automated replication system generated several thousand confirmed working samples from the pair). The vendors’ submission deadline was 23 February, around which our RAP test was built: three sets from samples first seen in each of the three weeks up to the 23rd, and a fourth from the week after, when vendors could no longer add samples to their databases and had to rely on their own technology to respond. We also collected the trojans, worms and bots that appeared between the end of the last comparative and the start of this month’s RAP period. In all, after verifying and weeding out the less prevalent samples, we ended up with around 40,000 trojans, 20,000 worms and bots, and roughly 20,000 samples per week in the RAP sets. The false-positive (clean) set likewise focused on software found on everyday home systems – music and video players, games and so on – with around 100,000 files added and some outdated items retired, keeping it at just under half a million files and a hefty 125GB. Some new plans for the speed tests were put on hold this round; perhaps they will appear in future tests. However, we added a system performance test, again using files common on home systems. Its aim is to reproduce a simple, standardized set of operations, measuring how long they take, how much memory and CPU they use, and how well the security software copes with everyday activity. We picked at random several hundred video, audio and picture files of different types and sizes and stored them on a dedicated web server visible to the test machines. During the test these files were downloaded (individually and as zip archives), extracted, moved, copied, re-archived and deleted; the time, RAM and CPU consumed were compared with the same operations performed on an unprotected baseline machine. Every measure was taken multiple times and averaged to avoid unfair results, and we hope this test becomes a regular fixture. We had also hoped to run a new kind of test – a product’s ability to handle the very latest threats in real time while online; in other words, we wanted to break with tradition and test the cloud. But as the deadline approached we were busy dealing with entrants and had no more time for it – hopefully next time. A total of 69 vendors entered, including quite a few unfamiliar faces we were hearing of for the first time, while the regulars and some big international names absent from recent tests all gathered for this round; we look forward to seeing them perform at their best.


Ratings (6 participants · Experience +50 · Popularity +7)

皇甫暮云 +50: Hard work on the manual translation
鲁路修 +1: Excellent article
7even +1: The board is better with you here :)
七宝 +3: 0了...
arsh +1: Heading out soon and won’t be able to visit Kafan – supporting you

猪头无双 (OP)
Posted on 2011-6-26 15:43:44
Last edited by 猪头无双 on 2011-6-28 08:44




Qihoo is another of the wealth of solutions active in the bustling Chinese market space – this one based on the BitDefender engine. Having entered our tests on several occasions in the last couple of years, the product has a decent record of passes – but has also put us through some rather odd experiences.
The latest version came as a 110MB install package, including signatures from a few days before the submission deadline. Set-up was fast and easy, with no need to restart, and the process was complete in half a minute or so. The interface is fairly attractive, with bright colours and clear icons, a decent level of configuration options and a decent approach to usability. Stability seemed OK, and the oddities noted in previous tests were kept to a minimum. However, once again we noted that, although the on-access component claimed to have blocked access to items, this was not the experience of our opener tool, and often the pop-ups and log entries would take some time to appear after access was attempted (and apparently succeeded) – implying that the real-time component runs in something less than real time. This approach probably helped with the on-access speed measures, which seemed very light, while on-demand scans were on the slow side. RAM consumption was high, although CPU use was about average, and impact on our set of everyday jobs was not heavy. Detection rates, when finally pieced together, proved just as excellent as we expect from the underlying engine, with very high scores in all areas, and with no issues in the core sets a VB100 award is duly earned. Since its first entry in December 2009, Qihoo has achieved six passes and a single fail, with three tests not entered; the last six tests show three passes and a fail from four entries.

Translation:

Qihoo is another antivirus occupying the broad Chinese market, built on the BitDefender engine. As a product that has entered several of our past tests, Qihoo has a good record of passes, but it has also given us some odd experiences.
The latest installer weighs 110MB, including virus signatures from before the deadline. Installation was quick and simple, completing in about half a minute with no restart needed. The interface is clean and lively, with bright, clear buttons and a sensibly organized set of configuration options, giving good usability. Stability was OK, and the strange pop-ups seen in earlier tests were kept to a minimum. But once again we found that although the real-time monitoring component claimed(?) to block certain samples’ access, the pop-ups and log entries would only show up after a while, implying the real-time component does not always work in real time. This approach (the delayed real-time pop-ups) helps with file-access speed (not scan speed), which makes it (360 AV) appear very light.
(In other words, what we have always known: 360 delays its pop-ups to reduce the lag when opening files.) On-demand scanning, however, sat firmly in the slow camp. RAM use was high; CPU use was average, and the impact on our individual tests was not great. Detection rates were as excellent as we expected – presumably thanks to the BitDefender engine – with high scores everywhere, so the VB100 award is deserved without question. Since joining VB testing in December 2009, Qihoo has passed six times, failed once, and missed three tests.

猪头无双 (OP)
Posted on 2011-6-26 15:44:03
Last edited by 猪头无双 on 2011-6-26 16:58




Keniu has been a regular participant in the last few tests, having first entered in the summer of last year. The company has recently formed an alliance with fellow Chinese security firm Kingsoft, but so far there have been no signs of a merging of their solutions, with Keniu still based on the Kaspersky engine. The install package is a fraction under 100MB, including all required updates, and the set-up process is fast and simple, with only a few steps, no need to reboot and everything done in less than a minute. The interface is bare and minimalist, with two basic tabs, a few large buttons and a basic set of configuration controls. With sensible defaults and smooth stable running the tests were out of the way in no time. Scanning speeds were somewhat on the slow side, especially in the archives set, with archives probed very deeply by default. RAM and CPU usage were on the low side, and impact on our activities bundle was not too high. Detection rates were excellent, as expected from the solid engine underpinning the product, with very high figures in all sets. The clean set threw up no problems, and the WildList was handled fine on demand, but in the on-access run a single item was marked as missed by our testing tool. Suspecting an error, we reinstalled and repeated the test, this time finding several dozen items missed, including the one not spotted the first time, and the product’s internal logs matched those of our testing tool. Running a third install showed another selection of misses – even more this time. In the end, no changes to the product settings or the way the test was run could prod the product into functioning properly. This rather baffling result denies Keniu a VB100 award this month; the vendor’s record shows three consecutive passes in its earlier three entries.

Translation:

Keniu is another regular of recent tests, having first entered last summer. The company recently formed an alliance with fellow Chinese security firm Kingsoft, but there has been no word of Keniu’s antivirus being folded into Kingsoft’s, so we still count Keniu as a product OEM-ing the Kaspersky engine. The installer is just under 100MB including all required updates; installation is quick and simple, with few steps, no reboot needed, and the whole process done in under a minute. The interface is minimal: a two-tab layout, a few large buttons, and a settings section. With sensible defaults and smooth, stable running, testing got under way quickly. Scan speeds were on the slow side, especially on archive files, because the defaults probe archives deeply. RAM and CPU usage were low, and the impact on our file bundle was small. Detection rates were very high – credit to the OEM engine – giving Keniu good scores across the board. The false-positive test went well, and the WildList was handled fine on demand, but the real-time monitor missed one sample. Suspecting a program error, we reinstalled and retested, only to find even more misses, including the sample missed the first time, with the product’s internal logs matching the results from our testing tool. A third run produced still more misses than the first two. In the end, with no change to the product settings or test method able to fix it, we concluded the result was the product’s own problem, so the award is withheld this time; the record shows Keniu had previously passed three tests in a row.

猪头无双 (OP)
Posted on 2011-6-26 15:44:24
Last edited by 猪头无双 on 2011-6-26 16:59





KIS Advanced edition

Kingsoft is a major player in the Chinese market, and has been a regular in our comparatives since its first appearance in 2006. The vendor came into this month’s test looking for a change of fortune, after a string of tricky tests upset by problems with polymorphic viruses in our WildList sets. The vendor’s ‘Advanced’ version came as a compact 68MB installer, which runs through simply in a handful of standard steps with no reboot required. The product interface is bright and cheerful – not the most visually appealing, but clean and simply laid out, with a basic but functional set of configuration controls. Operation was stable and solid throughout, and the tests were completed in good time. Scanning speeds were not outstanding, but on-access lag times were not bad, and while RAM use was a little higher than some, CPU use was below average, as was impact on our suite of standard activities. Detection rates were far from stellar, with low scores in all our sets. The trojans set was particularly poorly covered, and RAP scores fluctuated unpredictably but never achieved anything close to a decent level. Nevertheless, the core certification requirements were met, with no problems in the WildList or clean sets, and a VB100 award is duly earned. The last two years show six passes and four fails, with only the two Linux comparatives not entered; three of those fails were in the last six tests.

Translation:

Kingsoft is one of the major players in the domestic antivirus market, and has been a regular at VB testing since 2006. The version entered this month was hoping to turn its fortunes around after a string of tricky tests in which Kingsoft was thrown off balance by the polymorphic viruses in the WildList sets. The so-called ‘Advanced’ edition is a compact 68MB installer; installation is simple, needing only a few steps and no restart. The interface is bright and festive – not the most visually appealing, but simply laid out, with configuration options that are basic yet functional. It ran stably and well, and testing took little time. Scan speeds were unremarkable, but real-time lag times were not bad; RAM use was slightly high, CPU use below average. Detection rates were far from outstanding, with low scores in our sets – the trojan detection rate was especially low, and the RAP scores fluctuated unpredictably without ever reaching a satisfying level. Nevertheless, the core results were fine, with good performance in the WildList and false-positive tests, so it earned this round’s VB100 award. Over the last two years Kingsoft has six passes and four fails, sitting out only the two Linux-platform tests; three of those fails came in the last six tests.

猪头无双 (OP)
Posted on 2011-6-26 15:47:40
Last edited by 猪头无双 on 2011-6-26 15:59

KIS Standard-A edition

Kingsoft has routinely entered its ‘Standard’ product alongside the ‘Advanced’ one, and this time offers two separate variants on the theme (‘Standard-A’ and ‘Standard-B’), although as usual they are hard to tell apart. The install process is again fast and simple, and the interface clean, responsive and easy to navigate, with good stability allowing us to get through all the tests in good time. Scanning speeds and lag times closely matched those of the ‘Advanced’ edition, while RAM use was a little higher and CPU use a little lower, with impact on our activity set a little higher too. As expected, detection rates were even worse, with some truly terrible scores in the RAP sets – the proactive week score bizarrely some way better than the others. Despite this poor showing, the WildList set was covered fully and there were no issues in the clean sets, so a VB100 award is earned, just about. That makes for four passes and four fails in the last dozen tests, with four not entered; in the last year the product has had two passes and two fails, with two tests skipped.


Translation:

As usual, Kingsoft submitted its Standard edition alongside the Advanced one; this time the Standard edition is split into A and B variants, though even we would struggle to explain the difference. Installation is fast and simple; the interface is clean, responsive and easy to operate, and stability was good, making testing painless. Scan speeds and lag times were close to the Advanced edition’s, with RAM use slightly higher, CPU use slightly lower, and a slightly higher impact on our activity samples. Entirely unsurprisingly, detection rates were even worse, with awful scores in the RAP test – though, bizarrely, the proactive-week score was unexpectedly somewhat better than the rest. Despite this poor showing, the WildList results were flawless and the false-positive test raised no problems, so it receives this round’s VB100 award.
猪头无双 (OP)
Posted on 2011-6-26 16:00:02
Last edited by 猪头无双 on 2011-6-28 09:07





KIS Standard-B edition
There’s not much more to say about the third entry from Kingsoft, with very little to distinguish it from the other two in terms of user experience, with the install process and interface identical to the other two. Even the fine detail of the version information is unchanged. Scanning speeds were a little slower, and lag times a little higher in some cases, with more RAM consumed than either of the others, but fewer CPU cycles, while the impact on our activity suite was much the same. Detection rates were fairly abysmal, a fraction lower than the other ‘Standard’ edition, but the core certification requirements were met and a VB100 award is earned.

Translation:

There is no need to talk about Kingsoft a third time: in user experience the three are practically identical, with the same install process and interface as the other two – even the fine detail of the version information is unchanged. Scan speeds were slightly slower, lag times slightly longer on some samples; RAM use was a little higher than the other two but CPU use lower, and behaviour against our activity suite was much the same. Detection rates were truly awful – even a fraction lower than the other ‘Standard’ edition – but the core certification scores met the bar, so it earns this round’s VB100 award.

猪头无双 (OP)
Posted on 2011-6-26 16:12:38

Thanks to 傻羽家的姐夫 and tokthoo for providing the report: http://bbs.kafan.cn/thread-1014464-1-1.html This translation is for reference only and does not necessarily reflect the official meaning; if in doubt, go by the official report.


Stick ’em up! (holding this floor)

Ratings (2 participants · Popularity +2)

微微的笑 +1: The board is better with you here :)
jefffire +1: Knockoff

凛风冲击
Posted on 2011-6-26 16:17:39

Front-row support!
wei581314
Posted on 2011-6-26 16:19:07

Kingsoft’s three versions alone left the VB crew at a loss, and with Keniu that makes four. Honestly...
maikeyin2010
Posted on 2011-6-26 16:25:08

What I don’t get is: the June results are already out, so why are we only now discussing April’s?