Setting up Windows XP has become such a familiar and oft-repeated task that it requires very little effort these days. In fact, we simply recycled bare machine images from the last run on the platform a year ago, tweaking and adjusting them a little to make them more at home on our current hardware and network set-up, and re-recording the snapshots ready to start testing. As usual, no updates beyond the latest service pack were included, and additional software was kept to a minimum, with only some network drivers and a few basic tools such as archivers, document viewers and so on added to the basic operating system.

With the test machines ready good and early, test sets were compiled as early as possible too. The WildList set was synchronized with the January 2011 issue of the WildList, released a few days before the test set deadline of 16 February. This meant a few new additions to the core certification set, the bulk of which were simple autorun worms and the like. Most interesting to us were a pair of new W32/Virut strains, which promised to tax the products, and as usual our automated replication system churned out several thousand confirmed working samples to add into the mix.

The deadline for product submission was 23 February, and as usual our RAP sets were built around that date, with three sets compiled from samples first seen in each of the three weeks before that date, and a fourth set from samples seen in the week that followed. We also put together entirely new sets of trojans, worms and bots, all gathered in the period between the closing of the test sets for the last comparative and the start of this month's RAP period. In total, after verification and classification to exclude less prevalent items, we included around 40,000 samples in the trojans set, 20,000 in the set of worms and bots, and a weekly average of 20,000 in the RAP sets.

The clean set saw a fairly substantial expansion, focusing on the sort of software most commonly used on home desktops. Music and video players, games and entertainment utilities dominated the extra 100,000 or so files added this month, while the retirement of some older and less relevant items from the set kept it at just under half a million unique files, weighing in at a hefty 125GB.

Some plans to revamp our speed sets were put on hold and those sets were left pretty much unchanged from the last few tests. However, a new performance test was put together, using samples once again selected for their appropriateness to the average home desktop situation. This new test was designed to reproduce a simple set of standard file operations and, by measuring how long they took to perform and what resources were used, to reflect the impact of security solutions on everyday activities. We selected at random several hundred music, video and still picture files, of various types and sizes, and placed them on a dedicated web server that was visible to the test machines. During the test, these files were downloaded, both individually and as simple zip archives, moved from one place to another, copied back again, extracted from archives and compressed into archives, then deleted. The time taken to complete these activities, as well as the amount of RAM and CPU time used during them, was measured and compared with baselines taken on unprotected systems.
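For readers who want a concrete picture of such a measurement, the following is a minimal sketch in Python of how one cycle of these file operations might be timed. It is an illustration only, not part of the VB test harness: the server URL and file names are invented placeholders, and the RAM and CPU sampling performed in the real test is omitted for brevity.

import os, shutil, time, urllib.request, zipfile

SERVER = "http://testserver.example/media/"           # hypothetical stand-in for the dedicated web server
FILES = ["clip01.avi", "track01.mp3", "photo01.jpg"]  # hypothetical stand-ins for the sampled media files

def timed(label, fn):
    # Run one operation and report its wall-clock duration.
    start = time.perf_counter()
    fn()
    elapsed = time.perf_counter() - start
    print("%s: %.2fs" % (label, elapsed))
    return elapsed

def download():
    for name in FILES:
        urllib.request.urlretrieve(SERVER + name, os.path.join("incoming", name))

def move_and_copy():
    for name in FILES:
        shutil.move(os.path.join("incoming", name), os.path.join("staging", name))
        shutil.copy(os.path.join("staging", name), os.path.join("incoming", name))

def compress():
    with zipfile.ZipFile("bundle.zip", "w") as zf:
        for name in FILES:
            zf.write(os.path.join("incoming", name), arcname=name)

def extract():
    with zipfile.ZipFile("bundle.zip") as zf:
        zf.extractall("unpacked")

def cleanup():
    # Delete everything so the next run starts from scratch.
    for d in ("incoming", "staging", "unpacked"):
        shutil.rmtree(d)
        os.makedirs(d)
    os.remove("bundle.zip")

def cycle():
    download(); move_and_copy(); compress(); extract(); cleanup()

for d in ("incoming", "staging", "unpacked"):
    os.makedirs(d, exist_ok=True)

# As in the test itself, each measure is repeated several times and averaged;
# on a protected system the mean would then be compared with a baseline
# recorded on an unprotected machine.
runs = [timed("cycle", cycle) for _ in range(3)]
print("mean cycle time: %.2fs" % (sum(runs) / len(runs)))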
As with all our performance tests, each measure was taken several times and averaged, and care was taken to avoid compromising the data – for example, the download stage was run on only one test machine at a time to avoid possible network latency issues. We hope to expand on this selection of activities in future tests, possibly refining the selection of samples to reflect the platforms used in each comparative, and perhaps also recording the data with greater granularity.

We had also hoped to run some trials of another new line of tests, looking at how well products handle the very latest threats and breaking somewhat with VB100 tradition by allowing both online updating and access to online resources such as real-time ‘cloud’ lookup systems. However, when the deadline day arrived and we were swamped with entrants, it was clear that we would not have the time to dedicate to this new set of tests, so they were put on hold until next time.

The final tally came in at 69 products – breaking all previous records once again. Several of these were entirely new names (indeed, a couple were unknown to the lab team until the deadline day itself). Meanwhile, all the regulars seemed to be present and correct, including a couple of big names that had been missing from the last few tests. With such a monster task ahead of us, there was not much we could do but get cracking, as usual crossing all available digits and praying to all available deities for as little grief as possible.