Simply put, yes: throughput and capacity are more of a determining factor than processor speed when measuring the effectiveness of any system. The main reason is that the processor is not usually the bottleneck in a system; it's the I/O devices and bus speeds that are the main culprits. This is why the focus across the industry in recent months has been the shift to solid state drives (SSDs) over the hard disk drives (HDDs) most of us use presently. Secondary storage is generally the slowest I/O in a system, so any improvement there trickles down to the rest of the system as a whole, making for more effective execution of processes and a more positive user experience. The throughput of an SSD is significantly higher than that of an HDD, and its access times are far lower, which leads to faster response times and more efficient use of the processor.

On the system board, bus speeds over the past few years have increased almost to the point where the bus between the CPU and primary storage matches the CPU's speed, which is a good thing. Once the speed of the main memory bus matches that of the CPU, there is little latency when swapping process states, which results in higher efficiency within the system and better response to the user's requests. At that point, the bottleneck goes back to the I/O devices.

The issue with SSDs right now is expense: they have a much higher per-byte cost than a traditional HDD, so drive capacities are lower. For example, (as of this post) a major online retailer sells a 500GB 7200rpm SATA III HDD for $80, while on the same site a 512GB SATA III SSD is $585. Plus, even with 6Gbps throughput, secondary storage still does not function at the same speed as the processor, so the CPU still ends up sitting idle more than it should, just less so with an SSD. Modern multi-core processors can carry general users through the next couple of generations of innovation, and SSDs have already started improving those same users' experience with reduced latency and large performance gains. The next stage is to reduce costs so that SSD capacities increase while prices come down, much like HDDs have done over the past few decades.
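To put that price gap in concrete terms, here is a quick back-of-the-envelope sketch in Python using only the two retail prices quoted above; the drive names and figures are just the examples from this post, not a market survey.

```python
# Back-of-the-envelope cost-per-gigabyte comparison using the prices
# quoted above (April 2012 retail figures; actual prices vary by vendor).

drives = {
    "500GB 7200rpm SATA III HDD": {"capacity_gb": 500, "price_usd": 80.00},
    "512GB SATA III SSD":         {"capacity_gb": 512, "price_usd": 585.00},
}

for name, d in drives.items():
    cost_per_gb = d["price_usd"] / d["capacity_gb"]
    print(f"{name}: ${cost_per_gb:.2f} per GB")

# Output:
#   500GB 7200rpm SATA III HDD: $0.16 per GB
#   512GB SATA III SSD: $1.14 per GB
```

Roughly a 7x premium per gigabyte, which is why SSD capacities at a given price point still lag so far behind HDDs.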
Unless they are systems engineers, gamers, or hardcore geeks, most people do not consider system throughput to be a potential bottleneck, because all they hear or see are buzzwords like "gigahertz", "multi-core", "terabytes", "Blu-ray", and "HD" or "3D" graphics. What most do not understand is that the throughput between the main memory, CPU, north and south bridges, and video processors should be as fast as possible and, ideally, matched. For example, if the CPU core speed is rated at 2.0GHz, then in theory the memory clock speed should also be 2.0GHz, the bus pathways between the bridges and video processors should also run at 2.0GHz, and so on. This produces an optimal operating environment at the hardware level, but it is not usually the case, because hardware is made to function at different speeds and capacities to appeal to different price points, from cheap but functional to expensive but really good. Once you bring I/O devices into the mix, the processor usually ends up sitting idle, waiting on interrupts, more often than it should.
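Here is a minimal sketch of that "slowest link wins" argument. Every figure below is hypothetical and chosen purely for illustration (none of them describe a real board, and a disk is not really clocked in GHz); the only point is that the effective rate of the whole path is capped by its slowest hop.

```python
# Toy model: work that has to cross every hop in the chain can only move
# as fast as the slowest hop, no matter how fast the CPU core is clocked.
# All numbers are made up; "GHz-equivalent" for storage just stands in
# for "much, much slower".

path_ghz = {
    "CPU core":            2.0,
    "Memory bus":          1.6,   # hypothetical mismatch
    "North/South bridges": 1.0,
    "Secondary storage":   0.1,   # spinning HDD, orders of magnitude slower
}

bottleneck = min(path_ghz, key=path_ghz.get)
effective = path_ghz[bottleneck]
ratio = path_ghz["CPU core"] / effective

print(f"Effective rate is set by the {bottleneck.lower()}: "
      f"{effective} GHz-equivalent")
print(f"The CPU could issue roughly {ratio:.0f}x the cycles the slowest hop "
      f"can service, so it spends much of its time idle, waiting on I/O")
```

Swap the hypothetical storage figure for something closer to the memory bus (as an SSD does) and the idle ratio shrinks dramatically, which is the whole argument above in one line of arithmetic.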
With SSDs, durability is another important aspect. Since there are no moving parts, a shock to the drive is less likely to cause severe data loss or, even worse, drive damage. This is why our smartphones can be dropped from a couple of feet off the ground and usually survive with nothing more than cosmetic damage: they use solid state memory. If they had moving parts, those parts would surely be damaged on the first drop, even from just inches above the ground. Longevity concerns are what led companies like OtterBox to release ballistic, shock-resistant cases for mobile devices. People get attached to their devices, and it is more economical to spend $30-$50 to protect an expensive device with a case like that than to deal with the hassle and expense of buying a replacement at retail and migrating data.
What do you think? Am I on point here?
~Geek
Tuesday, April 24, 2012
What security issues must be resolved now which cannot wait for the next version of Windows® to arrive?
A recent discussion in my Operating Systems class prompted an interesting response on my part about what the main security issues afflicting Microsoft's systems are. Here's my two cents:
The most common threats to Microsoft systems, on both the consumer and business sides, come through Internet Explorer and Microsoft Office vulnerabilities. Reviewing this month's Microsoft Security Bulletin, there are critical updates patching various vulnerabilities across all releases of the Windows OS for Internet Explorer versions 6 through 9 and Microsoft Office versions 2003 through 2010 SP1. The less visible critical updates are for the .NET Framework, which supports interactive sessions with users through browser windows; these are related to the same Internet Explorer issue of an attacker being able to execute code remotely by having a user visit a spoofed website and/or click on a link or banner containing the malformed code. What I find interesting in this month's report is that the majority of the notices Microsoft put out have to do with the same class of vulnerability, namely allowing an attacker to remotely execute code through a browser session.

As many of my classmates have mentioned in their posts this week, this is part of the evolution of operating system software in particular. Microsoft spends millions of dollars and thousands of man-hours developing, and hardening, its kernels. With a user base of Microsoft software reported at over 1 billion users worldwide, there are only so many scenarios that can be built into software testing labs, making it impossible to correct every problem before a product is released to manufacturing. Plus, many of these users have advanced knowledge of systems and software and can find vulnerabilities in scenarios impossible to reproduce under lab conditions. A lot of these users report those vulnerabilities to the development team so a patch can be created and released to the masses; other vulnerabilities are not discovered until a virus or some other piece of malware is put into the wild to exploit them. Security companies like Symantec and McAfee use their monitoring software to track these attacks and inform the developers of the issues while generating their own patch, or cure as it were, for the impending exploit. In the case of major global attacks, Microsoft works with the security companies, and government entities, to create a cohesive solution that not only cures infected systems but also protects unexposed systems from future exploitation.
Given the great investment it takes to create a major operating system release, patching makes the most sense for delivering important updates to a system without disrupting the flow of user adoption and education on best practices. In some instances, such as when an issue involves deadlocked processes, a workaround can be implemented, such as adding a forced delay for processes competing for the same resource. This is why some Windows Updates involve changing registry values or adding a batch file to effect a process workaround while the development team evaluates whether the issue is isolated or potentially widespread. If they determine that the issue can be replicated across a majority of systems, they will create a patch to permanently change the process and resolve the condition; otherwise they will leave the workaround in place for the limited cases that do come up under unusual scenarios.
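As a rough illustration of what a forced-delay workaround looks like in principle, here is a generic Python sketch, not anything Microsoft actually ships: two workers keep competing for the same pair of resources in opposite order, and a short randomized back-off keeps them from deadlocking.

```python
# Minimal sketch of the "forced delay" style of workaround described above.
# Illustration only: this is not how any real Windows patch is implemented.

import random
import threading
import time

lock_a = threading.Lock()
lock_b = threading.Lock()

def worker(first, second, name):
    # Each "process" needs both resources at once; grabbing them in
    # opposite orders is the classic recipe for deadlock.
    while True:
        with first:
            if second.acquire(timeout=0.1):
                try:
                    print(f"{name} acquired both resources and finished")
                    return
                finally:
                    second.release()
        # The workaround: give up, wait a short randomized delay, then
        # compete for the resources again instead of holding one lock
        # and waiting forever on the other.
        time.sleep(random.uniform(0.01, 0.05))

t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "process 1"))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "process 2"))
t1.start(); t2.start()
t1.join(); t2.join()
```

The delay does not fix the underlying ordering problem; it just keeps the system moving until a proper patch changes how the processes acquire the resource, which mirrors the temporary-workaround-then-patch flow described above.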
What do you think? Am I on point, or way off base?
~Geek