Views: 7226 | Replies: 29

[Original Impressions] Looking at the Future Through Kaspersky's Overview of Cloud Security [To Reiterate: These Are Impressions, Not a Translation]

syfwxmh
Posted on 2009-4-25 07:45:03
This is an article posted on April 24th on Kaspersky Lab's viruslist.com (in English):
http://www.viruslist.com/en/analysis?pubid=204792059
Note: the original article (in English) follows.
Introduction

Every era has its buzzwords and the IT sector is no different: "multimedia" in the 80s, "interactive" in the 90s, and "web 2.0" in the last few years. And just when everyone's starting to get comfortable with the most recent piece of terminology, along comes another one. The latest phrase du jour is "in-the-cloud" and, just like clouds themselves, it seems to be a rather nebulous concept. This article aims to clear up some of the confusion and to explain the differences between cloud computing and in-the-cloud security.
How do clouds come into it?

If you've ever seen a network diagram that includes the Internet, you might have noticed that the standard symbol for the Internet is a cloud. This makes sense; you can't see inside a cloud, and you can't see exactly which computers are currently "in" the Internet. If there are clouds in the sky, you can guess it's probably going to rain – you don't need to know what's happening inside the cloud. In the case of the Internet "cloud", all you need to know is that it's there and that you can connect to it. You don't need to know exactly what's going on inside.
Cloud computing

The forerunner of cloud computing

Cloud computing existed long before Windows, MacOS and Linux began their march onto users' systems. However, back then the concept of cloud computing had a different name; it was referred to as "mainframe + terminal". The mainframe was a very powerful server where all programs were run and all data was stored, while the terminal was a simple system that was only used to connect to the mainframe. Of course, such systems had to have both a fast (and therefore expensive) mainframe and the appropriate network infrastructure. This was something that IBM's PC and home computers didn't need – they were designed to provide everything in one machine, and they fairly quickly rendered the concept of mainframes obsolete.
Companies like Hewlett-Packard, Sun and Citrix have tried to keep the idea alive, however, replacing the terminal with what is now called a thin client, and the big mainframes with powerful out-of-the-box servers. These servers generally use standard PC technology (only with faster and more expensive processors, more RAM and more disk space than the average desktop computer). Nevertheless, the whole concept is still dependent on having a powerful server and a fast network connection, which is why such systems haven't been adopted by home users until now – they simply didn't have access to such resources. However, these days it's quite common for private households to have high-speed Internet access and download speeds of several MB per second.
The infrastructure is almost ready

Although the mainframe and thin client concept can be seen as the forerunner of cloud computing, there is a difference. A company that deploys thin client technology usually has to buy its own server, provide a hosting location, electricity etc. In the case of cloud computing, though, all the hardware is bought by an in-the-cloud provider, who then rents the available capacity to anyone who needs it. The interesting point is that customers don't rent a server, but just a certain amount of memory or processor cycles. The customer doesn't need to know if everything is hosted on a single computer or spread over several machines. Providers can even substitute a different machine for the hardware where a customer's software is running without any noticeable effects, thanks to technologies designed to enable this kind of hot-swapping. And this is what the cloud is really about – as a customer you don't have to worry about details; if you need more memory or more CPU power, you simply click the appropriate button to request these services.
One of the first major players in the area of cloud computing was Amazon, which introduced the "Amazon Elastic Compute Cloud" concept in 2008. Recently Canonical – the company behind Ubuntu Linux – announced that Amazon's service will be integrated into the upcoming Ubuntu 9.10 release next autumn. Other companies like Microsoft and Google have also entered the market and will fight for their share of potential profits. There are also thousands of hosting companies which offer "virtual servers" to small businesses or individuals who need a cheap web server, and this can be seen as a type of in-the-cloud service.
So the market is there; it's going to grow fast and it will be the job of service providers to offer hardware that meets customer needs. Setting up these machines will be one of the biggest challenges: as soon as there's significant take-up of in-the-cloud services, more and more server centers will be needed. Such centers have to be geographically close to the customers, because every kilometer increases the delay between client and server. Although the delay is minimal, high-performance users such as gamers find even a delay of 200 milliseconds unacceptable, as it detracts from the experience of real-time gaming. Once individuals become interested in using in-the-cloud services, providers will have to buy a lot of extra hardware to meet the additional demand. Within the next decade or so there will be server centers in every town, with some possibly being located in multi-occupancy buildings which have 100+ inhabitants.
The benefits

Though most in-the-cloud providers currently target business customers, it's likely that individuals will be the ones to make the concept a success by using it in large numbers. Businesses have dedicated IT staff, but for individuals, owning a computer can mean stress. First you have to buy a computer, something which sounds easier than it is – should you choose a laptop, which is highly portable, or does a cheaper and often faster desktop better meet your needs? In the world of cloud computing you can have both; you can buy a cheap thin-client-style laptop (which you can connect to a monitor and keyboard if you want) for less than 300 Euros. Then all you need to do is connect to your in-the-cloud provider and enjoy as much performance and memory as you like (or can afford). Two years later, when normally you'd have to replace your out-of-date laptop, you can still use your thin client, because it's the provider who offers the performance rather than the machine itself.
The stresses associated with owning a computer aren’t limited to hardware headaches; updating the operating system and applications, as well as patching vulnerabilities, can be a challenging task. Cloud computing takes care of all of these issues, making home computing cheaper, more secure, and more reliable.
Cloud computing will also benefit the content industry. In the past a number of methods have been used to prevent the illegal copying of music and movies, but none of these methods are problem-free. There have been cases of copy protected CDs not working on some CD players and Sony’s efforts to protect content resulted in a media scandal and withdrawal of the technology used. More and more MP3 shops are moving away from DRM protected material and offering unprotected music files instead. Cloud computing, however, will offer DRM a second lease of life, with content producers offering movies, games and music directly to the consumer. Such content will be designed to run within a cloud computing system, and it will require more time and effort to make unsanctioned copies of movies and music delivered in this way. Ultimately, this will result in fewer illegal copies and higher profits for the producers.
The risks

There are clear benefits to cloud computing, but there are also risks. Hardly a day goes by without a report of data being leaked or lost. Using in-the-cloud services means placing unprecedented trust in the provider. Is there any company you trust so much that you're willing to give it full access not only to your emails, but also to all your private documents, bank account details, passwords, chat logs and photos of yourself and your family? Even if you trust the company, there's no guarantee that your information won't fall into the wrong hands. The key point: data leakage isn't an issue that's exclusive to cloud computing, but providers will have access to all of your data, rather than just select parts of it. So if a leak does occur, the consequences could be very wide-ranging.
Will the risks drive cloud computing out of business? This seems very unlikely, as cloud computing is convenient for users and profitable for providers. A complete refusal to use in-the-cloud services would leave a company just as isolated (and unable to do business) as a refusal to use email would today. Rather than boycotting this technology, a more productive approach will involve the creation of new legislation and strict guidelines for providers, as well as technologies that make it (almost) impossible for a provider's employees to trawl customer data. At the moment, every company which wants to offer in-the-cloud services is free to do so, but the picture will be very different in ten years' time. Providers will have to abide by standards if they want to offer such services.
The introduction of standards will also attract malware writers and hackers; this has been amply demonstrated by the standardization of PCs, with the overwhelming majority of them running Windows. Once cloud computing reaches critical mass, there will probably be a few highly specialized hackers who know how to break into cloud systems in order to steal or manipulate data, and they will be able to make a lot of money. There will also be scammers who don't have any particular interest in technology but who will use today's tricks, such as 419 emails, to get their hands on victims' money. And there will be cybercriminals creating and using Trojans, worms and other malware, and security companies protecting their customers from these threats. Overall, there won't be that much change, except for the fact that everyone – users, providers, and cybercriminals alike – will be working in the cloud.
In-the-cloud security

The main principle of cloud computing is being able to use a computer's resources without having physical access to the computer itself. In-the-cloud security is completely different: it is the use of outsourced security services that are offered in the cloud, while the operating system is still running locally on the PC on your desk. In-the-cloud security comes in different flavors: for instance, Kaspersky Lab offers Kaspersky Hosted Security Services, which provide anti-spam and anti-malware services by filtering traffic for harmful content before it reaches the end user. The company's personal products also offer in-the-cloud security in the form of the Kaspersky Security Network. The next part of this paper focuses specifically on the latter type of security service offered in desktop solutions.
Back in February 2006, Bruce Schneier blogged about security in the cloud. He wasn’t the only one to address the topic, and, although there was a lot of discussion as to how the concept should be implemented, it was generally agreed that implementation was a must, rather than an option. So why has it taken the AV industry more than two years to start implementing this technology in its products?
A question of necessity

Implementing new technologies not only takes time and money, but often has disadvantages. The most obvious disadvantage related to in-the-cloud services - whether cloud computing or in-the-cloud security - is that it's essential to be online in order to enjoy the benefits. As long as existing technologies were able to provide protection against the latest malicious programs without the user having to be online, there was no need for change. As always, however, the changing threat landscape has led to changes in the antivirus industry.

The traditional way of detecting malicious software is by using so-called signatures. A signature is like a fingerprint - if it matches a piece of malware, the malware will be detected. However, there's a difference between human fingerprints and signatures – a fingerprint only matches one individual, whereas a good signature doesn’t just identify a single unique file, but several modifications of that file. High quality signatures don’t only mean higher detection rates but also a decrease in the number of signatures needed, and, consequently, lower memory consumption. That is why the number of signatures provided by AV companies is usually much lower than the total number of files known to be malicious.
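By way of illustration, here is a toy Python sketch (an invented example, not any vendor's actual engine) of the difference described above: an exact checksum identifies exactly one file, while even a crude byte-pattern signature can cover several modifications of the same malware.

import hashlib

def exact_fingerprint(data):
    # An exact checksum: changing a single byte yields a completely different value.
    return hashlib.md5(data).hexdigest()

def pattern_matches(data, pattern):
    # A (very crude) pattern signature: one record matches every file that
    # contains the byte sequence, i.e. several variants of the same malware.
    return pattern in data

variant_a = b"MZ...decrypt_loop...payload_version_1"
variant_b = b"MZ...decrypt_loop...payload_version_2"

print(exact_fingerprint(variant_a) == exact_fingerprint(variant_b))  # False: two different hashes
print(pattern_matches(variant_a, b"decrypt_loop"))  # True
print(pattern_matches(variant_b, b"decrypt_loop"))  # True: one signature, two variants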
[Graph omitted from the original article: growth in the number of signature records over the past few years]

Clearly, good signatures can help keep signature databases small, but this doesn't tackle the fundamental problem: the more malicious files there are, the more signatures will be needed. And as the graph above shows, the number of signatures has increased dramatically over the past few years in response to the explosion in the number of malicious programs.
An increased number of signatures doesn't only result in higher memory consumption and additional download traffic, but also in a decrease in scan performance. At the time of writing, the Kaspersky Lab signature databases used in personal products were 45 MB in size. If the trend illustrated above continues (and there's no reason to believe it won't), then these databases will swell to 1000 MB+ within the next three to four years. This is more than the total RAM of some computers; databases of this size would leave no space for an operating system, browser or games, and using a computer's resources just to scan the machine itself, rather than for work or play, isn't a satisfactory solution.
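As a back-of-the-envelope check of that projection (my own arithmetic, not a figure from the article): going from 45 MB to 1000 MB in roughly three and a half years would require the databases to multiply by about 2.4 every year.

# Implied annual growth factor if the databases grow from 45 MB to 1000 MB in ~3.5 years
start_mb, end_mb, years = 45, 1000, 3.5
annual_factor = (end_mb / start_mb) ** (1 / years)
print(f"~{annual_factor:.1f}x per year")  # prints "~2.4x per year"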
So what's the answer? Deleting old database records relating to MS-DOS malware wouldn't help; the amount of memory gained would be smaller than the daily increase in new signatures. Although, as mentioned above, more efficient signatures can help, this is fighting the symptoms rather than the disease, and it won't ultimately stop the increase in the number of database records.
In 2007 some AV companies realized that in-the-cloud security would be the only way out of the situation. The need to be online at all times was still seen as a drawback when compared to desktop security solutions, but ultimately, the benefits of this technology outweigh the disadvantages.
The advantages of in-the-cloud security

The whole idea behind in-the-cloud security as Kaspersky Lab defines it is that a potentially malicious file or website is checked online instead of being checked against local signatures stored on the computer itself. The easiest way to do this is by calculating the checksum (i.e. the digital fingerprint) of a file and then asking a designated server if a file with this checksum has already been identified as malicious. If the answer is "yes", the user is shown a warning message and the malware is quarantined.
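A minimal Python sketch of this flow (a toy illustration under my own assumptions: the set below stands in for the vendor's servers, whose real protocol isn't public; it also includes the local verdict cache discussed further below):

import hashlib

# Stand-in for the detection database on the AV company's servers.
# The single entry is the well-known MD5 of the EICAR test file.
CLOUD_DB = {"44d88612fea8a8f36de82e1278abb02f"}

verdict_cache = {}  # local cache: fingerprint -> verdict, so known files cause no traffic

def fingerprint(data):
    # Computing a checksum is far cheaper than a full signature-based scan.
    return hashlib.md5(data).hexdigest()

def check(data):
    fp = fingerprint(data)
    if fp not in verdict_cache:  # only new, as-yet-unseen files reach the "server"
        verdict_cache[fp] = "malicious" if fp in CLOUD_DB else "clean"
    return verdict_cache[fp]

eicar = b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
if check(eicar) == "malicious":
    print("Warning: file matches a known-malicious fingerprint; quarantining it.")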
From a user's point of view there’s no significant difference between this approach and the old approach, apart from improved computer performance. It doesn’t take a lot of processor resources to create a checksum for a file, which means doing this is several times faster than performing a complex signature based scan. There are also a number of other advantages which a user might not initially notice, but will still benefit from:
Lower memory consumption and smaller download footprint. As mentioned above, in a few years' time classic signature databases will be pushing the limits of what users are willing to accept in terms of size. In-the-cloud solutions solve this problem easily: all the "fingerprints" are stored on the servers belonging to the AV company. The only thing stored on the end user's machine is the AV software itself plus some data used for caching purposes. The server is only contacted if a new, as yet unidentified program is found on the local disk. If the user doesn't install any new programs, there's no need to download any new data. This is in complete contrast to today's situation where signatures have to be constantly updated in case a new program (which might be malicious) is installed.
Better response times. Response times have always been a hot topic for the AV industry. It’s one thing to make a new signature available, but if this signature arrives several hours after a user opens an infected attachment, it's often too late: the computer is probably already part of a botnet and will have downloaded additional malicious components for which no detection exists yet. This is why Kaspersky Lab began delivering almost hourly updates when many AV companies were still adhering to a daily update cycle. Nevertheless, the time lapse between the appearance of a new virus and the release of a signature could still be an hour or more. Proactive detection methods and client-side emulation technologies can compensate for this gap, but the problem persists. The impact of in-the-cloud security is clear; as the signature check is conducted on demand and in real time, the response time is significantly better. As soon as a file is identified by an analyst as being malicious, this information is made available to the client, bringing the response time down to minutes or even seconds.
It’s not just signatures for Trojans, viruses and worms that can be transmitted using in-the-cloud technology, but almost everything that forms a part of regular signature updates: URLs of dangerous websites, titles and keywords that appear in the latest spam mailings, and also complete program profiles that can be used by host intrusion prevention systems (HIPS) such as the one in Kaspersky Internet Security 2009. This technology isn’t restricted to PCs either. It also makes sense to use such technology to protect mobile devices, particularly as smartphones don't have as much RAM as a PC, and therefore every byte counts. Most AV solutions for mobile phones focus on detecting mobile threats, as detecting all malware that targets Windows XP and Vista would consume too many resources. In-the-cloud technology could make this issue a thing of the past.
Two-way communication

It's obvious that in-the-cloud systems can help tell a customer if a file on his machine is infected or not - the client machine queries, the server answers. However, depending on how the technology is implemented, the process can also work the other way round, with customers helping the AV company to identify and detect new threats. Suppose a file is analyzed on the client machine using emulation or proactive defense technologies, and the conclusion is that the file is malicious. The file can then be uploaded for further investigation by the AV company's analysts. Of course, this means the customer has to share the file itself, which s/he may not be willing to do. However, there's another way of doing this: instead of sending a binary file, the client software could simply send the fingerprint of the file together with details (such as file size, threat classification etc.) from the modules which performed the analysis. If a new worm is spreading fast, the analysts at the AV company see that there's a new file which was flagged as suspicious and which suddenly appeared on thousands of computers. It's then up to the analysts to determine if this is a new malicious file. If they come to the conclusion that there's a genuine threat, adding detection is easy: the fingerprint for the file already exists and simply has to be moved into the detection database to which client computers send their requests.
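A toy sketch of this reverse channel (all field names and the threshold are invented for illustration): clients report only the fingerprint plus analysis details, and the back end watches for suspicious fingerprints that suddenly show up on many machines.

from collections import Counter

# Each client sends only the file's fingerprint plus details from the
# modules that analyzed it, never the binary itself.
reports = [
    {"fp": "ab12cd", "size": 48640, "flagged_by": "emulator"},
    {"fp": "ab12cd", "size": 48640, "flagged_by": "proactive_defense"},
    {"fp": "ab12cd", "size": 48640, "flagged_by": "emulator"},
    {"fp": "9f00aa", "size": 1024, "flagged_by": "emulator"},
]

sightings = Counter(r["fp"] for r in reports)

# A suspicious file seen on many machines at once is escalated to a human
# analyst; if confirmed, its fingerprint is simply moved into the detection
# database that client machines already query.
OUTBREAK_THRESHOLD = 3  # absurdly low, purely for this demo
for fp, count in sightings.most_common():
    if count >= OUTBREAK_THRESHOLD:
        print(f"escalate {fp}: flagged on {count} machines")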
There's no such thing as a free lunch

Despite all the advantages of in-the-cloud security, there are also some drawbacks. The example above shows how adding detection on the basis of statistical monitoring can be an effective way of combating sudden outbreaks. However, it dramatically increases the risk of false positives. Suppose a new version of a popular shareware program is released; news will spread quickly, and very soon lots of people will be downloading the software. If the program affects system files, isn't signed, and perhaps downloads other executables in order to update itself, it's likely to get flagged as malicious by an automated in-the-cloud system. A few seconds later, this would result in thousands of false positives around the world. Of course, this wouldn't happen if a human analyst had looked at the program, but this would take time, negating the potential benefit of rapid detection. Although it would be possible to fix the false positive in a matter of seconds (in contrast to a false positive in a classic signature database, which remains in place until the next update is downloaded), there would still be negative consequences. People don't like false positives, and if an individual notices that in-the-cloud security leads to increased false alarms, s/he is likely to disable the feature or move to another AV company which still employs the classic approach. To prevent this, AV companies need to set up and maintain their own collections of files which are known to be clean. If a new patch or program is released, the AV company has to analyze and whitelist it very quickly, before their customers start downloading it.
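A whitelist-first lookup might work like the following sketch (my own construction, not vendor code): fingerprints of known-clean files are consulted before any automated verdict, so a false positive on a whitelisted file never reaches the user.

# Maintained collection of files known to be clean (the whitelist).
KNOWN_CLEAN = {"fp_of_popular_shareware_update"}

# Fingerprints flagged by automated statistical monitoring, which may
# wrongly include a legitimate but unsigned, self-updating program.
AUTO_FLAGGED = {"fp_of_popular_shareware_update", "fp_of_real_worm"}

def verdict(fp):
    if fp in KNOWN_CLEAN:  # the whitelist always wins,
        return "clean"     # suppressing the false positive
    if fp in AUTO_FLAGGED:
        return "malicious"
    return "unknown"

print(verdict("fp_of_popular_shareware_update"))  # clean
print(verdict("fp_of_real_worm"))                 # malicious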
Moving into the cloud therefore means a lot of additional work for AV vendors. Apart from actively maintaining the collection of clean files, the company’s servers have to be absolutely stable 24 hours a day. Of course, this has always been the expectation of customers, as offline servers can't deliver updates. Nonetheless, the classic approach will work with signatures that are several hours old even though detection rates will be lower. The cloud approach differs here; customers are left unprotected during server downtime, as the whole concept is based around on-demand and real time communication. In the case of server downtime, heuristics would have to be used in combination with powerful HIPS technology in order to ensure customers remain protected.
What does the future hold?

Kaspersky Lab was among the pioneers of in-the-cloud security with the release of KIS 2009 and the rollout of the associated Kaspersky Security Network. Many security companies have now started to implement the in-the-cloud approach in their products, but it should be stressed that the industry overall is still at the initial stages of harnessing the full power of this technology. The situation is comparable to that in the world of cars: although electric cars will, in the long run, come to replace petrol and diesel cars, at the moment most of the cars which are advertised as electric are actually hybrids. The IT world is usually quicker to innovate than other industries, but nevertheless, it's likely to be two or three years before in-the-cloud security fully replaces the signature-based detection methods used today.
Conclusion

By now, the difference between cloud computing and in-the-cloud security should be clear. It will be a couple of years before cloud computing really takes off as companies will have to get used to the idea of sharing all of their data with service providers.
AV products which implement in-the-cloud technology have already been released and there seems little doubt that by the end of the year, this technology will be widely accepted. As time goes on, the two approaches will merge, with individuals and organizations using cloud computers protected by in-the-cloud security services. Once this happens, the Internet will be as essential to daily activity as electricity is today.
Note: this post is original. If you repost it, please credit 梦幻. Thanks!
Off-topic: for anyone who wants to practice reading English, this is a good place; it's quite interesting.

   
What follows is my own original commentary (these are impressions, not a translation!):
The article focuses on the difference between cloud computing and in-the-cloud security, and explains how Kaspersky Lab puts cloud security into operation.
Reading it, it's not hard to see how serious today's security threats are. Malicious threats multiply year after year, and signature databases have been growing at an explosive rate; Kaspersky Lab's product databases have already reached 45 MB (a figure taken from the article). At this rate, vendors' databases will pass 1000 MB within three to four years. Databases that large are very inconvenient for users and have a big impact on scan speed and performance, which also indirectly explains why each new version of the products keeps getting bigger. That's why Kaspersky Lab began researching cloud computing in 2007, drawing on ideas from pioneers such as IBM to build its cloud security system: this is the Kaspersky Security Network we see in the 2009 version. The system focuses on handling spam and malware with the fastest possible response time, so that threats are intercepted and dealt with at the server before the user downloads or views them.
As for the problem of ever-growing signature databases, Kaspersky Lab is pursuing a new idea (similar to McAfee's Artemis): in-the-cloud scanning. This scanning technique can respond rapidly to malware attacks and can be decoupled from the local signature database, shrinking the product's footprint and improving performance, while the local database takes on the role of handling false positives. Anyone who has used McAfee Artemis knows that its false positives were serious at first, but after a period of tuning they dropped. This wasn't down to a technical improvement; rather, another kind of signature was added to the local database, what we used to call a whitelist.
This whitelist technique is the best way to avoid false positives. Anyone who has run into a Kaspersky false positive knows that the 2009 version has never falsely flagged system files (the versions that did were the 7 series). That's because Kaspersky Lab has added the fingerprints of all system files to the whitelist, so even if a faulty definition appears in the signature database, no alarm is raised. (Note: this only holds for the original files; copies that have been compressed or packed will still be flagged, so there's no need to throw bricks at me over this. I doubt anyone would pack their system files anyway, unless you want the system stuck at the bootmgr or ntldr boot screen because of missing files.) And this is probably why the new-architecture 2011 version of Kaspersky is committed to cloud computing. Although this technology will arrive later than McAfee's, it should be more mature by then and handle the details better, since Kaspersky's cloud security R&D began at roughly the same time as McAfee's.
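To illustrate the point about packed copies, here is a toy Python sketch (my own demonstration, not Kaspersky's code): repacking changes the file's bytes, so its fingerprint no longer matches the whitelist entry for the original.

import hashlib
import zlib

original = b"contents of a clean system file"
packed = zlib.compress(original)  # stand-in for any packer or compressor

def fingerprint(data):
    return hashlib.md5(data).hexdigest()

whitelist = {fingerprint(original)}        # only the original file is whitelisted
print(fingerprint(original) in whitelist)  # True: the original is shielded
print(fingerprint(packed) in whitelist)    # False: the packed copy is not,
                                           # so it can still trigger an alert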
Kaspersky's cloud security currently focuses on the following areas (already available): data statistics, malicious URL blocking, HIPS interception, spam collection and so on. All of these can already be experienced in the 2009 version.
I'm confident Kaspersky will keep refining this technology to give users the greatest possible protection!



[ This post was last edited by syfwxmh on 2009-4-25 21:01 ]

Ratings (3 participants, Popularity +3):

一刀大师 +1: I can't read English, but thanks for sharing!
yangwanhui +1: The signature database really is getting bigger; this time the 506 package alone is over 50 MB.
孤独的我 +1: Thanks for sharing :)

rzqevergo
Posted on 2009-4-25 08:28:14

The OP always provides timely information with fresh ideas.
lm91128
Posted on 2009-4-25 08:41:31

I can't understand the English.
syfwxmh (OP)
Posted on 2009-4-25 08:49:45

Reply to #3 (lm91128):

There aren't really any very difficult words~~

A quick glossary:
fingerprint 指纹
concept 观念
mainframe 大型计算机
cloud computing 云计算
kaspersky security network 卡巴斯基安全网络
standardization 标准化
………………………………

You can actually guess some of the words.
syfwxmh (OP)
Posted on 2009-4-25 08:51:22

Added the original article.
sharkkong
Posted on 2009-4-25 10:37:50

The system-file whitelist in the 2009 version really has proved its worth. A while ago KAV 7 users ran into false positives on system files; 2009 users also got an alert (though no pop-up), but because the files were on the whitelist they were excluded by default. Unless users read the logs carefully, they wouldn't even know a false positive had occurred.

Ratings (1 participant, Popularity +1):

yangwanhui +1: Haha, up this early? No rest today?

414447992
Posted on 2009-4-25 10:45:30

Just reading the original takes quite a long time.
yangwanhui
Posted on 2009-4-25 11:03:58

The whitelist really is a big plus. A new operating system is about to come out, and a new Kaspersky version along with it. I wonder how it will perform; really looking forward to it... 2010-ing.
胜利十一人
Posted on 2009-4-25 12:43:16

English really is useful, it seems.
chengdu202
Posted on 2009-4-25 13:18:18

Why is the English so long and the Chinese so short?