Monday, June 8, 2015

Competitive Forces that Shape IT Strategy in Business

Competitive Forces

The introduction of information technology (IT) systems has changed how companies conduct business, and also how they compete in their respective markets. There are a number of risks and advantages to implementing an IT system, which can be managed with the correct mix of technologies as an integrated platform. The purpose of this paper is to review the competitive forces that shape IT strategy in business.

IT Risk to Competitive Advantage

One of the primary risks to a company's competitive advantage is systems availability. The computer has become a key tool in conducting business, which means systems must be reliable and provide the resources necessary for people to meet or exceed the expectations of their roles. From an IT perspective, system health should be proactively monitored across the enterprise so that downtime is minimized in nearly every potential scenario. For companies that provide 24/7 services to their clients, where revenues are calculated by the minute, the revenue lost while offline can be many times higher.
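To make the stakes concrete, a back-of-the-envelope estimate of outage cost can be sketched as follows (the revenue figure and outage duration are hypothetical, purely for illustration):

```python
def downtime_cost(revenue_per_minute: float, minutes_down: float) -> float:
    """Estimate the direct revenue lost while a system is offline."""
    return revenue_per_minute * minutes_down

# A hypothetical 24/7 service earning $500 per minute, down for two hours:
loss = downtime_cost(500.0, 120)  # $60,000 in direct revenue alone
```

Indirect costs (reputation damage, contractual penalties, lost bids) typically push the real figure well beyond this direct-revenue floor.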

Another risk to competitive advantage is the disclosure of the sensitive or proprietary data that is the source of the company's advantage. A sales agency's value to a manufacturer, for example, derives from its industry contacts and distribution network; its contact databases are therefore its most valuable asset. One risk is espionage: an insider could provide these details to a competitor, or to a manufacturer looking to disrupt the market by selling online or via direct sales. Another is the disclosure of sensitive data that represents customers' private information, including contact information and financial transaction data. For example, the healthcare industry is governed by HIPAA regulations, which stipulate what data is to be protected, how it is protected, and under what circumstances it can be disseminated. These regulations were put in place to protect the consumer and stabilize competition between market providers.

A third area where IT represents a risk to a company's competitive advantage is ineffective IT governance. According to Gartner (2013), "IT governance is defined as the processes that ensure the effective and efficient use of IT in enabling an organization to achieve its goals." Throughout the past 30 years, companies have struggled to define the role of IT as it relates to business goals, and many still do. IT was seen as a necessary evil, a means to an end, or another tool to automate certain tasks within a company, but not a way to achieve strategic advantage over a competitor or a method to dominate a market. As business goals evolve, formal IT governance ensures that resource allocations remain dynamic and scalable enough to meet these changing needs.

A fourth risk area is slow adoption, in which a company does not respond to the challenges presented by direct competitors by updating or upgrading its technological capabilities. Many companies across all industries are slow to adopt new technologies, even those that offer clear advantages over current systems, due to user resistance to change or the excessive cost of redesigning proprietary applications to be compatible with modern systems. By not adopting new technologies, capabilities become limited, workers become unable to respond to customer demands in a timely manner, and systems can become overwhelmed to the point of failure.

A final area where IT represents a risk to a company's competitive advantage is cyber security. Any deficiency in a network's security model presents a vulnerability that, if attacked with the correct vector, could severely damage a company. The single most important aspect of any cyber security plan should be user education. A number of hardware and software solutions are available to centralize and manage cyber security across an enterprise, providing comprehensive methods to thwart a direct attack from an outside entity; however, they can only do so much. Half of all data breaches occur through phishing attacks, "in which unsuspecting users are tricked into downloading malware or handing over personal and business information" (IT Governance Ltd, 2015). These usually come in the form of a legitimate-looking email, and once the user initiates the connection, the system becomes infected and performs whatever the installed malware was programmed to do. The result of a breach can be catastrophic to an organization because of the importance of the data lost, and potentially because of legal ramifications in the form of lawsuits over divulging protected data, whether inadvertently or deliberately.

IT Support of Competitive Advantage

A clear competitive advantage provided by IT is systems availability. With mission-critical systems, redundancy is designed into the system model in an effort to eliminate the risk of downtime and approach 100% availability. While the expense of such a design can reduce net profits, it becomes a strategic advantage because a company is able to provide 24/7 services to its customers, regardless of geographic location. Many companies are moving their customer-facing systems into cloud services to provide just that: availability. From online shopping to financial institutions to educational facilities, many organizations have to provide a 24/7 model in order to meet customer demand, and IT is the only way to ensure continuity and consistency across all communication methods.
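The arithmetic behind redundancy is worth seeing: if redundant components fail independently, the system is down only when every copy is down at once, so availability compounds quickly. A minimal sketch (the 99% single-server figure is illustrative, not a claim about any particular platform):

```python
def combined_availability(single: float, redundant_copies: int) -> float:
    """Availability of N independent components in parallel: the system
    is unavailable only if every redundant copy fails simultaneously."""
    return 1.0 - (1.0 - single) ** redundant_copies

# One server at 99% availability (~3.65 days of downtime per year)
# paired with an identical failover reaches roughly 99.99%.
paired = combined_availability(0.99, 2)
```

This is why a failover pair, despite roughly doubling hardware cost, can move a service from "days of downtime per year" to "minutes."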

IT provides a unique benefit for protecting sensitive and proprietary data in that the data can be encrypted to ensure only authorized users can gain access. Some regulations, such as HIPAA and PCI-DSS, stipulate not only data encryption but also low-level, whole-drive encryption using strong, vetted algorithms such as AES-256. Encrypting data, and the communication channels it travels over, helps ensure that no outside party can view the information contained in these files.
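Python's standard library has no AES implementation, so purely as a toy illustration of the symmetric-encryption principle — the same key transforms plaintext to ciphertext and back — here is a one-time-pad sketch. This is not a substitute for a vetted AES-256 library, and the sample record is invented:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each data byte with the corresponding key byte."""
    return bytes(d ^ k for d, k in zip(data, key))

record = b"patient: J. Doe, dx: confidential"
key = secrets.token_bytes(len(record))      # random key, as long as the data
ciphertext = xor_bytes(record, key)         # unreadable without the key
recovered = xor_bytes(ciphertext, key)      # same operation reverses it
assert recovered == record
```

Real deployments never hand-roll crypto like this; compliance regimes expect standardized ciphers, managed keys, and audited implementations. The sketch only shows why possession of the key is what separates authorized readers from everyone else.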

Proper implementation of IT governance supports a company's competitive advantage because it ensures that all processes are designed for the effective and efficient use of company resources, with IT as the common thread. Over the past few years, as IT has proven its worth to companies looking to remain relevant in an ever-changing consumer landscape, organizations have come to realize how important it is to bring IT goals into alignment with business goals. As an organization grows to meet market conditions, aligning these two areas becomes essential to ensuring stability throughout the process, and it provides the foundation for continuity as the company evolves.

A fourth way that IT supports a company's competitive advantage is by enabling the company to adapt quickly to changing markets. When implemented in an elastic model, such as the facilities provided by cloud solutions, companies can respond almost instantly to spikes in consumer demand with a few clicks of a mouse. By leveraging this model, companies can improve efficiency, increase worker output, and lower operating costs, thereby increasing revenues and profits. A number of companies have adopted the agile model of development for their products, where concepts move quickly from the drawing board, to prototype, to final product in a short time frame, and issues are fixed as they are found in a production environment. The cloud model of scalability makes this possible by providing resources dynamically, on demand.
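The elastic model described above boils down to a simple provisioning rule: watch the load, keep some headroom, and never drop below a floor. A toy sketch of such a rule (the capacity figures, headroom factor, and minimum-instance floor are all hypothetical defaults, not any provider's actual policy):

```python
import math

def desired_instances(current_load: float, capacity_per_instance: float,
                      min_instances: int = 2, headroom: float = 1.25) -> int:
    """Toy autoscaling rule: provision enough instances to carry the
    current load plus 25% headroom, never dropping below a floor of two."""
    needed = math.ceil(current_load * headroom / capacity_per_instance)
    return max(min_instances, needed)

# A demand spike from 100 to 1000 requests/sec (250 req/sec per instance)
# scales the fleet from the 2-instance floor up to 5 instances.
quiet = desired_instances(100, 250)    # floor applies
spike = desired_instances(1000, 250)   # scaled out
```

Production autoscalers add smoothing and cooldown periods so the fleet does not thrash, but the core decision is this ratio.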

A final way that IT supports an organization's competitive advantage is through a cohesive user education program and the implementation of an information security management system: a comprehensive approach to managing cyber security risks that takes into account not only people but also processes and technology. Security should be built into every process a user follows to manipulate data in an information system. Once the physical perimeter of the infrastructure is secured, users need to be trained to identify phishing attacks and social engineering tactics so they become a weapon against these attack vectors rather than the weak link. Part of that training should cover which cyber security systems are in place, how they protect users and corporate data, and why it is important for users to know this information.
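One tell that user training (and mail-filtering tools) focus on is a link whose visible text names one domain while the underlying URL points somewhere else. A minimal heuristic sketch of that single check — the domains are invented examples, and real filters combine many more signals:

```python
from urllib.parse import urlparse

def looks_like_phish(display_text: str, href: str) -> bool:
    """Flag a link whose visible text names one domain while the
    underlying URL resolves to a different one -- a classic phishing tell.
    Heuristic only: it checks domain mismatch and nothing else."""
    shown = urlparse(display_text if "://" in display_text
                     else "http://" + display_text).hostname or ""
    actual = urlparse(href).hostname or ""
    return bool(shown) and bool(actual) and not actual.endswith(shown)

# A link displayed as "paypal.com" but pointing at another host is suspect;
# the same text pointing at a paypal.com subdomain is not flagged.
suspect = looks_like_phish("paypal.com", "http://evil.example.com/login")
benign = looks_like_phish("paypal.com", "https://www.paypal.com/")
```

The point of teaching users this check manually is exactly the paper's argument: tooling catches much of it, but a trained user catches what the filter misses.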

IT Risk Scenario: System Availability

In the course of the author's career, there was an instance where a major system outage resulted in the company losing a multi-million-dollar opportunity to a competitor. The root cause of the outage was later found to be a misconfigured operating system update, provided by the software manufacturer as a critical patch for a well-known vulnerability. The misconfigured update caused every service hosted on the domain servers to reject all queries from all systems. Since the update was automatically deployed to every server in the forest, failover switching was not an option. It took over six hours to troubleshoot and eventually rebuild the primary server and supporting services to bring the network back online. In that time frame, a bid deadline expired for a major project and the author's company was removed from consideration. Since it was one of only two companies that serviced this specific product group, from different factories, the contract was awarded to the competition. It represented a $20 million opportunity that spanned three years across five large developments. Had the company been able to submit its bid, it would have saved the client 8% in costs and over a month in lead times.

IT Advantage Scenario: Data Privacy and Protection

Data security has become a major consideration for companies of all sizes, and for certain market segments it is a federal edict. Prior to the introduction of HIPAA regulations, the privacy of people's health records was being mishandled. Data was stored in proprietary formats, which increased administrative costs, and was shared with nearly anyone who had a seemingly legitimate need for it, whether for patient treatment or for insurance carrier marketing purposes. Once public outcry reached critical mass, the Health Insurance Portability and Accountability Act (HIPAA) of 1996 was created. HIPAA protects the confidentiality and security of healthcare information and helps the healthcare industry control administrative costs (TN Department of Health, n.d.).


The implementation of IT systems comes with many risks and rewards for any entity, whether a company or a person. The main purpose of IT is to make a company more effective and efficient across all operational parameters. Proper management of the risks and advantages of an integrated IT platform can ensure that a business is able to meet the demands of its customers while remaining in a position to evolve as rapidly as its market does. Once systems and software are set up, security models implemented, and data secured, user education becomes the key component in ensuring that IT provides a secure platform for the improved efficiency and increased effectiveness expected across all job roles.

References

Gartner. (2013). IT governance. Retrieved from

IT Governance Ltd. (2015). Federal IT professionals: insiders the greatest cybersecurity threat. Retrieved from

TN Department of Health. (n.d.). HIPAA: Health Insurance Portability and Accountability Act. Retrieved from

Monday, May 25, 2015

Security Systems Development Life Cycle (SecSDLC)

Security Systems Development Life Cycle

When designing information systems, there are logical phases which must be considered in order to achieve maximum efficiency and effectiveness throughout the organization in every role. Throughout the six phases of the systems development life cycle (SDLC), it is imperative to ensure that security is integrated into each aspect of the platform. When building a security project, the same phases of the SDLC can be adapted to suit. The security systems development life cycle (SecSDLC) shares similarities with the SDLC; however, the intent and activities are different. The purpose of this paper is to review and explain the phases of the SecSDLC, discussing its differences from the SDLC and applicable certifications.


Investigation

In this phase, the project scope and goals are defined by upper management. Management provides the process methodologies, expected outcomes, project goals, the budget, and any other relevant constraints. "Frequently, this phase begins with an enterprise information security policy (EISP), which outlines the implementation of a security program within the organization" (Whitman & Mattord, 2012, p. 26). Teams are organized, problems analyzed, and any additions to scope are defined, discussed, and integrated into the plan. The final stage is a feasibility study to determine whether corporate resources are available to support the endeavor. The primary difference from the traditional SDLC is that management defines the project details; in the SDLC, the business problems to be solved are researched and developed by the project team.


Analysis

In this phase, the documents gathered in phase one are studied and a preliminary analysis of the existing security policies is conducted. At the same time, the current threat landscape is evaluated and documented, as are the controls in place to manage or mitigate those threats. Included at this stage is a review of legal considerations that must be integrated into the security plan. The modern global threat landscape is such that any business, small or large, is susceptible to attack from a third party, whether directly or indirectly. Certain industries have strict requirements on how data is to be stored, shared, or manipulated. Standards and frameworks such as HIPAA, NIST guidance, PCI-DSS, ISO 27001, and others provide guidelines for an organization to be certified as compliant with established processes and methods, and some industries require these certifications in order for a company to conduct business in that sector. Understanding state legislation regarding which computer activities are deemed illegal is vital to overall plan execution and sets the baseline for the types of security technologies that can be implemented across the enterprise. The risk assessment in this phase identifies, assesses, and evaluates the threats to the organization's security and data. The final step is to document the findings and update the feasibility analysis. The main differences from the SDLC at this phase include the examination of legal issues, the relevant standards for the segment within which the company operates, the completion of a formal risk analysis, and the review of existing threats and their underlying controls; those aspects are unique to the SecSDLC. While considering security within every phase of the SDLC is vital, the focus and scope of those security considerations differ vastly from the SecSDLC, which focuses solely on the security aspects of an information system.
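The risk assessment step above is often reduced to a simple scoring exercise: rate each identified threat's likelihood and impact, multiply, and rank. A minimal sketch of that idea — the threat names and 1-to-5 scores below are invented placeholders, not a real assessment:

```python
# Hypothetical (likelihood, impact) scores on a 1-5 scale for each threat.
threats = {
    "phishing":      (5, 4),
    "insider leak":  (2, 5),
    "server outage": (3, 4),
}

def rank_risks(threats: dict) -> list:
    """Score each threat as likelihood x impact and rank highest first,
    so mitigation effort can be allocated to the biggest risks."""
    scored = ((name, lik * imp) for name, (lik, imp) in threats.items())
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

ranking = rank_risks(threats)  # phishing tops this hypothetical list
```

The output feeds directly into the feasibility update: the top-ranked risks justify the controls the plan budgets for.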

Logical Design

With the SecSDLC, this phase creates and develops the blueprints for information security across the enterprise. Key policies are examined and implemented, and an incident response plan is generated to ensure business continuity, define the steps taken when an attack occurs, and specify what is done to recover from a disastrous event. As in the SDLC, applications, data support, and structures are selected by considering multiple solutions as an approach to managing threats. Unique to the SecSDLC is the level of detail involved in securing the SDLC's core concepts: analyzing the system security environment and functional security requirements; gaining assurance that the security system developed will perform as expected; weighing cost considerations for hardware, software, personnel, and training; documenting security controls that are planned or in place; developing security controls; and defining use case tests and test evaluation methods. The concepts and best practices detailed by NIST can serve as a guide throughout this phase with regard to system hardening and the security measures expected to ensure end-to-end security across the enterprise. Project documents are again updated and, as with previous phases, the feasibility study is revisited to determine whether to continue the project and/or whether to outsource it.

Physical Design

The fourth phase of the SecSDLC evaluates the information security technologies needed to support the created blueprint and generates alternative solutions, which dictate the final system design. The best of the technologies evaluated in the logical design phase are selected to support the solutions developed, whether custom built or off-the-shelf. A key component of this phase is developing a formal definition of what "success" means for the project implementation, against which it can be measured. The design of physical security measures to support the proposed system is also included at this phase. Project documents are updated and refined, and a feasibility study is conducted to ensure the organization is prepared for system implementation. The final stage of this phase is the presentation of the design to sponsors and stakeholders for review and final approval. If regulations such as HIPAA and/or PCI-DSS must be adhered to, the physical design of the infrastructure components must be modeled after their specific requirements regarding the machines data is stored on, how those machines are physically accessed, and how the data stored on them is disseminated to authorized parties; this is unique to the SecSDLC. While data access control is a standard consideration for any information system, HIPAA, for example, imposes specific requirements to maintain the privacy of patient records and ensure that data is shared only with specifically authorized personnel within the medical industry. PCI-DSS covers how customer credit card details and identifiable data are stored, used, and accessed within a company's network.
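The access-control requirement described above is typically realized as role-based access control: each role is granted only the actions it needs, in the spirit of HIPAA's "minimum necessary" principle. A minimal sketch — the roles and permission names below are illustrative, not drawn from any regulation's actual vocabulary:

```python
# Hypothetical role-to-permission mapping for a clinic's record system.
PERMISSIONS = {
    "physician": {"read_record", "write_record"},
    "billing":   {"read_billing"},
    "reception": {"read_schedule"},
}

def can_access(role: str, action: str) -> bool:
    """Allow an action only if the role was explicitly granted it;
    unknown roles get nothing (default deny)."""
    return action in PERMISSIONS.get(role, set())

# A physician may read patient records; billing staff may not.
allowed = can_access("physician", "read_record")
denied = can_access("billing", "read_record")
```

The design choice worth noting is the default deny: an unlisted role or action fails closed, which is the posture auditors expect.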


Implementation

This phase is similar to that of the SDLC. Selected solutions are purchased or developed, tested, implemented, and tested again. A penetration test may be conducted to ensure that the security measures installed perform as expected and that network resources are protected from third-party intrusion. Personnel issues are reevaluated, training and education programs are conducted, and finally the complete package is presented to upper management for final sign-off. The SDLC differs in this phase in that the system developed is rolled out to users for daily use; the SecSDLC is implemented on the back end by network administrators, as approved by upper management. Aside from accessibility issues repaired during testing, the user has no involvement in this phase of the SecSDLC.

Maintenance and Change

This is the most important phase of the SecSDLC because of the evolving threat landscape. Older threats mature into more dangerous ones, and new threats aim at new attack vectors against system weaknesses. Active, constant monitoring, testing, modification, updating, and repair must be conducted on information security systems to keep pace with maturing and emerging threats. Zero-day exploits pose a significant risk to organizations at the cutting edge of their industries, and a security plan must be flexible enough to proactively prevent these threats while also integrating methods of recovery should an attack occur through an unknown vulnerability. This phase differs most from the SDLC in that the SDLC framework is not designed to anticipate a software attack that requires a degree of application reconstruction. "In information security, the battle for stable, reliable systems is a defensive one" (Whitman & Mattord, 2012, p. 29). The constant effort to repair damage and restore data against unseen attackers is a never-ending process. Part of this phase includes the perpetual education of all personnel as new threats emerge and the security model is updated, because an educated user is a powerful security tool.


The purpose of the SecSDLC is to provide the framework for designing and implementing a secure information system paradigm. Since it is based on the SDLC, it shares many similarities in the processes and methods used to develop a comprehensive plan, but the intent and activities differ at each phase. While security is considered vital to every phase of the SDLC, the SecSDLC focuses solely on the implementation of technologies designed to protect an infrastructure from third-party intrusion, data corruption, and data theft. The SDLC develops the systems used within a business, while the SecSDLC develops the systems that protect those systems and an organization's users.

References

Whitman, M. E., & Mattord, H. J. (2012). Principles of information security (4th ed.). Retrieved from The University of Phoenix eBook Collection.

Saturday, April 4, 2015

Premise Control and Environmental Factors

"Premise control is the systematic recognition and analysis of assumptions upon which a strategic plan is based, to determine if those assumptions remain valid in changing circumstances and in light of new information." Planning premises are primarily concerned with environmental and industry factors, I will focus on the environmental factors. These are your intangibles, the uncontrollable factors the pose great influence on the success or failure of a strategy. One of the biggest influences on corporate strategy in relation to IT is Web 3.0 - the Internet of Things. The entire world is connected to everything at the speed of light through a complex mesh network of connected devices and computers dubbed the Internet. This poses a massive challenge to the status quo of conducting business in that consumers have a "get it now" mentality. With a few taps of a smartphone, or a few clicks of a mouse, consumers have access to not only information, but nearly anything you can imagine is available instantly. Satisfying this need for instant gratification poses a real threat to older, traditional methods of conducting business, while creating immense opportunities for the organizations that puts together the right mix of goods and services. The younger generations are leading the demand curve, as well as sharing feedback in real-time about the viability and quality of goods and services offered in the digital marketplace, and many companies are having trouble keeping pace. CTO's and business strategists are now having to create ways to remain relevant in this new digital world. It is not longer as simple as creating an online presence, or e-commerce capabilities. 
With social media taking center stage in real-time feedback, and given the sheer volume of information shared across these networks, companies need to take advantage of this marketing free-for-all and engage with their consumers, vendors, and service providers to form a cohesive and comprehensive ecosystem where all parties can interact, with the goal of not only improving the quality of the products being manufactured but also providing instantaneous and enriching C2B and B2B interactions. The Internet provides the medium for anyone to access any available information, anywhere they are geographically, and this has dictated a new business landscape. The strategy is no longer to appeal to a particular demographic or region, which used to be very successful for product placement and marketing plans; the strategy has shifted to being universally understandable across every demographic and every region, everywhere. Established best practices and presentation methods for digital information have created an expectation for all e-commerce providers, whether they be Walmart or the local eye doctor. Every consumer, old and young, demands that every company they interact with have not only an online presence but also a mobile app; a multitude of customer service options - phone, email, IM, chat, Skype, FAQs, an online knowledge base, automated troubleshooting tools; and, with larger corporations like Walmart, the ability to get in-store service for products purchased online. Companies like Walmart are large enough and have enough resources to remain relevant in the ever-changing, fast-evolving Internet of Things world; however, many companies, even the largest ones, are struggling to keep up. The textbook's opening example of the ups and downs of Dell Corporation is a perfect case in point.
Dell was late to the table in the mobile computing market, as it was focused on consumer PCs, a brief stint in consumer peripherals (printers, etc., which were highly unsuccessful), and then the enterprise, where it is still very strong with its server and managed services platforms. However, as the formal PC became something of a niche product group - old home PCs being transformed into home servers or left stagnant as tablets, convertible laptops, and smart devices proliferated in the global market - Dell fell behind the eight ball, as it were. Its PC sales dwindled, inventory accumulated, and users looked elsewhere for their technology needs, knocking Dell from the top of the market after multi-decade global dominance. The problem is that Dell never had good premise control, so it did not reevaluate and either change or abandon its existing strategy. The ultimate failure, in my opinion, was its inability to recognize, or react to, the paradigm shift in how consumers and businesses conducted business. Dell stood firm that the PC would remain the dominant force in the market; however, Apple was the catalyst of a massive movement with the release of the iPhone - it helped push mobile convergence to consumers in an easy and pragmatic way. Apple created a device that was so simple to use, and so connected with everything, that everyone jumped on board, and quickly. Smaller companies and startups rode its coattails as the mobile marketplace was born. In a short time frame, the entire world was connected via mobile devices. Companies like Google and Samsung were quick to recognize the paradigm shift and started creating systems and services specifically designed for mobile platforms. Fast forward to the modern day and the lines are gone - there is no separate platform for desktops and mobiles; all systems now see the same information in the same way.
As web development technologies evolved, and modern mobile systems became more powerful than the servers of just a few years ago, what used to be a significant difference in computing power became insignificant on all counts. Premise control has become the name of the game, and the evaluation and management of environmental factors have become the lifeblood of any organization that wants to be successful in the modern business world.

Pearce, John A., and Richard B. Robinson. Strategic Management: Planning for Domestic & Global Competition. 13th ed. New York: McGraw-Hill/Irwin, 2013.

Thursday, November 13, 2014

Digital Security Discussion

Topic of discussion in my Enterprise Models class tonight: Digital Security.  Something I touched on earlier this year.

Our text postulated: "Increasingly opening up their networks and applications to customers, partners, and suppliers using an ever more diverse set of computing devices and networks, businesses can benefit from deploying the latest advances in security technologies."

My Professor said: "My thoughts on this are opposite: by opening up your network, you are inviting trouble and the more trouble you invite in, the more your data will be at risk. I understand what they are hinting at, with a cloud based network, the latest security technologies are always available, therefore, in theory, your data is more secure. Everyone needs to keep in mind though, that for every security patch developed, there are ways around them."

He went on to mention how viruses could affect the cloud as a whole and that companies and individuals moving to cloud-based platforms will become the next target for cyber attacks as the model continues to thrive.

Which is all relevant; however, I have a different perspective on digital security. My counter-argument is that user education is the key. I have debated this topic - security and system users - many times over the years. Like most of us in the industry, I consider information security paramount. With the multiple terabytes of data we collect on our home systems, and even more in online interactions, keeping our data safe is really our last defense in privacy and security. As more companies and individuals place their corporate and personal data on cloud platforms, there is an uneasy sense of comfort for many people, including some seasoned pros. Companies like Google and Microsoft, which both have highly successful cloud models across the board, have taken responsibility for ensuring they have more than adequate digital and physical security in their data centers, which to an extent leaves it to assumption that the data and applications they warehouse and host are generally safe from intrusion. Users are the key to this whole ecosystem we have created, and this is where user education becomes critical. As most seasoned techies know, in the beginning systems and system operations were highly technical in nature, and only the most highly trained or technically creative individuals could initiate and manipulate computer systems. Viruses were something you caught from kids at school or coworkers, not a daily blitz of digital infections numbering in the hundreds of millions, perpetually attacking in various forms. As systems became more complex in design but simpler in use, the user's technical ability level eventually became irrelevant. People ages 1 to 100, and even some very well trained animals, can all navigate systems and digital networks with very little effort.
Our systems now do all the work for us; users simply need to provide basic instructions and gentle manipulations, instead of hand-coding instruction sets and inventive on-the-fly program generation, as was the status quo in the '70s, '80s, and '90s. This idle-user mindset is the reason criminal hackers are still traversing firewalls and breaking encryption algorithms, and they are growing in numbers, as is evident from the number of new malware detections and infections quantified annually across all digital platforms and all continents. Educating users on general best practices for system use and maintenance, how to identify potential scams, how to detect spoofed and malformed websites, what to avoid when reading emails or reviewing search results, and which security software is functionally the best, whether free or paid, is more critically important today than it has ever been. The problem is that the industry has created the lazy user by essentially conveying that security is a given. Microsoft even made a concerted effort by including Windows Firewall and Windows Defender as part of its operating system by default so that there was some protection for users out of the box. This was in response to a large number of users, who had been infected by one or more viruses, assuming they were protected because "it's from Microsoft, it has to be safe," which was far from the truth. As an educated user who knows how to secure systems and networks, I take it upon myself to ensure that users appreciate why they have to set passwords when logging into various systems and services. I teach the importance of digital security and how to be more security conscious in everyday interactions.
I teach them how to correctly navigate Internet search results (avoiding "ads"), how to understand various security prompts and what they look like so they don't ignore them, what security solutions should be installed and how to identify them, and so on. This improved knowledge has created a culture of awareness among my users, both at work and at home. I am regularly consulted by my peers on how to secure their own families' systems and how to explain it all to their children. This creates a more intelligent user, and thereby a more intelligent user community at large, making the Internet a bit more secure.

All of that said, it only takes a single missing character in source code to let a programmer break a program and cause havoc, or a user inadvertently installing malware. Even the most seasoned users make these mistakes from time to time, because we are all human, and as such we are fundamentally flawed; no security solution is 100% secure, because they are all developed and used by humans. The best you can do is make every effort to educate and secure, and hope no one targets you, because if someone wants in badly enough, they will get in and you won't be able to stop them.
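To make the spoofed-website lesson concrete, here is a minimal sketch of the kind of check I walk users through mentally: does a link's real hostname actually belong to a domain you trust, or is a trusted name merely buried inside an attacker-controlled address? This is my own illustration, not a production filter, and the `TRUSTED_DOMAINS` allowlist is a hypothetical example.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains the user actually trusts.
TRUSTED_DOMAINS = {"google.com", "microsoft.com"}

def looks_suspicious(url: str) -> bool:
    """Flag links whose hostname is not a trusted domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

# A classic spoof: the trusted name appears, but the real domain is the attacker's.
print(looks_suspicious("http://google.com.evil-login.example/verify"))  # True
# A genuine subdomain of a trusted domain passes.
print(looks_suspicious("https://accounts.google.com/signin"))           # False
```

The key habit this teaches is reading a URL from the right: the registrable domain at the end is what matters, not familiar words at the front.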


Wednesday, October 15, 2014

Artificial Intelligence and Decision Making

In a recent discussion in my Enterprise Models class, a classmate and I debated the limitations of artificial intelligence theories and human emotions. Here is my response:

From the research I have been doing over the years on AI specifically, one of the biggest challenges is how to program emotions into a computer system. I see two primary problems currently. The first, and the main problem, is that modern computing technology processes things in a linear fashion: every time slice of a CPU cycle is occupied by either a 1 or a 0. There is no middle ground, there is no gray area. Everything is black or white, and follows a strict logic rule set. What is currently being done with systems like Watson and Google's web-crawler software is to use software to simulate scenarios and have the hardware crunch the data, while another part of the software provides the processing logic through algorithmic manipulation, thereby creating an intelligent system. Current intelligent systems are limited by the scope of their programming environment. The second problem is that no programming language yet exists that can accurately tell a computer how to do what it needs to do in order to understand the logic behind a feeling. Most of the researchers I have found over the years say the technology isn't there yet, and I happen to agree. The possible solution to this quandary could be quantum computing.

With quantum computing, a qubit offers a system the ability to see a data stream in two states simultaneously. Each qubit is BOTH on and off (1 and 0) in the same "time slice" of a processing cycle, leveraging the power of superposition and entanglement. This allows the system to perform many operations on the same data stream. Neural networks simulate this through software, but over hardware that still processes data in a linear fashion. What we need is hardware that performs this natively, because hardware can process the same data stream much faster than software ever could. Enter quantum computing. D-Wave Systems is the current leader in true quantum computing with its D-Wave quantum computer, but the system is highly specialized at the moment due to a lack of programming knowledge; while the system has amazing potential, as you will see from a couple of the links below, no one really truly understands how to use it. There are other links below with details on their system and methodology.
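The superposition idea above can be written compactly in the standard textbook notation (my addition for clarity, not something from the linked material). A single qubit's state is a weighted combination of both classical values:

```latex
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1
```

Measuring the qubit yields 0 with probability $|\alpha|^2$ and 1 with probability $|\beta|^2$, and a register of $n$ entangled qubits carries $2^n$ such amplitudes at once, which is where the "many operations on the same data stream" intuition comes from.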

The problem with quantum computing is that it requires a completely new way of perceiving computers, and also a completely new way for users to interface with them, not to mention new hardware that performs in ways modern hardware cannot. That is what I see as the next wave of technological evolution. As transistors become subatomic with the help of graphene and carbon nanotubes, and technologies like memristors look to shatter our perceptions of information storage capacity and data throughput, quantum computers will become more commonplace across the landscape. The ability to create a true quantum system capable of processing complex emotional patterns is very real. Once we have a true quantum processor and a true quantum operating system, we will have not only the power to process such patterns in fractions of a nanosecond but also the programming logic and syntax to leverage an intelligent system, and possibly create a sentient computer system, otherwise known as AI.

AI is a fascinating concept, and exactly why it will be the focus of my post-grad work. Quantum computing is a subject I have dreamed about and followed since I was a young boy, before computers were commonplace and technology was still considered a super luxury. Today technology is seen as a necessary commodity, but there are still concepts that have yet to be discovered or invented, and quantum computing is currently my field of interest. Once we researchers and scientists figure it out, it will change the world.

D-Wave System References:

Quantum Computing References:

Monday, October 6, 2014

Technological Evolution - Quantum Computing, Memristors, and Nanotechnology

It is amazing how the evolution of technology changes perspectives on the future so quickly. With holographic interactive screens currently in use, memristors and atomic-level transistor technologies at our fingertips, and new developments in using light as a means to interact with systems or store system data, AI and systems like Jarvis are finally able to go from drawing-board concept to real-life prototype.

For as long as I can remember, I have been talking about quantum computing and nanotechnology and how they are the future of systems and human interactions. As a younger teen, when I first started learning about quantum mechanics and ultra-microscopic hardware theories, I saw that the future of computer systems and computer-human interactions would be largely logic based and would function faster than the speed of human thought. By marrying the concepts of quantum mechanics and advanced computer-system theory, intelligent systems and true AI are highly viable and will be here within the current generation. As advances in nanotechnology take transistors to the subatomic level, and theories in quantum computing become a reality, we are quickly going to see the industry change as the traditional system paradigm is shattered and a new evolution in technology is ushered in. I would call it the quantum age, where Schrödinger's cat can exist in both physical states without the concern of observation tainting the true reality of the object's existence. The potential gains of the quantum processors and quantum computing methods that scientists around the world are currently developing into physical models are, at the moment, limited only by manufactured hardware capacities.
As physical hardware capacities become perceived as unimportant to system-planning schemes, due to advances like the memristor and photonics, including the newest nano-laser (see reference), focus can be given to writing programs that take advantage of this new systems paradigm. What is going to take time is the change in mindset required to use a quantum system, because it demands a completely new approach to hardware and software design. Modern systems process data in a linear manner, handling one bit after another based on the time-slice protocol programmed into the operating system and the CPU itself. Regardless of how many processors you throw at a system, each core still processes only one bit of data at any given time slice. The fastest supercomputer, China's Tianhe-2, can perform more than 6.8 quadrillion operations per second (3.12 million cores x 2.2 GHz each = 6,864x10^12 operations per second), but each core still works sequentially. Quantum systems do not function in this manner; they function in a far different reality where a bit can be both a 1 and a 0 simultaneously within a single time slice, though quantum processors would not use a time-slice function; they would require something else yet to be defined. As scientists gain a better understanding of how to create a truly quantum computer system and a quantum-capable operating system, we will see technology advance into arenas yet to be discovered. What we once called science fiction is quickly becoming scientific fact.
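The throughput figure quoted above is just the product of the cited core count and clock speed; here is my own back-of-the-envelope check of that multiplication:

```python
# Rough aggregate clock-cycle throughput for Tianhe-2, using the figures cited above.
cores = 3.12e6      # ~3.12 million processor cores
clock_hz = 2.2e9    # 2.2 GHz per core
cycles_per_second = cores * clock_hz

print(f"{cycles_per_second:.3e} cycles/second")  # 6.864e+15, ~6.8 quadrillion
```

Note this counts raw clock cycles, not floating-point operations; it is only meant to show where the 6.8-quadrillion figure comes from.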


References: (nano-laser) (Tianhe-2 details)

Friday, August 22, 2014

Technology Roadmap - Wearables

Since I started working on my Master's in Information Systems, I have been learning a lot about many different aspects of IS.  Aside from really helping me focus my perspective on what I want to research in my post-grad work, I have been enjoying all that I am learning, and this recent class (as of this post) is no exception.

The last paper I did in this class, CMGT557 Emerging Technologies & Issues, was to create a technology roadmap for an emerging technology.  While it is something I blogged about a month ago, I chose wearables so I could extend the concept into a full plan.  Here's my 2 cents...~Geek

Technology Road Map: Wearables
Current State of Technology
            Wearables are extensions of our smartphones, tablets, and phablets, offering a set of capabilities that are inferior to our smart devices but highly functional as currently designed.  Thanks to innovations in miniaturization and power efficiency, products such as curved high-resolution glass screens, the various takes on the computerized watch, the Samsung Gear Fit bracelet and other exercise monitors, Google Glass wearable computers, systems embedded into clothing for purposes such as muscular development and health monitoring, and biological chips embedded under the skin that hold medical conditions and history details are all wearable technologies already changing how many services are delivered.  Through improving miniaturization processes and manufacturing capabilities enabled by more precise automation systems, these wearable technologies will cause market disruption for products that currently dominate the technology market, such as laptop computers and other larger portable computing devices.
Business Initiatives and Drivers & Technology Landscape
            As the mobile workforce continues to expand through thinner and lighter computing devices with more available connections to high-speed access points, many businesses are able to follow their normal workflows without being physically tethered to their offices.  Currently a suite of devices enables the mobile workforce, including smartphones, tablets, and laptop computers.  Their integrated peripherals, security feature sets, and, in some instances, rugged designs lend themselves to a highly portable and productive work platform available from any location with a data connection.  For a salesperson in a world of light-speed communications and instant gratification, being able to access critical customer and product metrics with a couple of taps of a fingertip is the difference between generating and landing opportunities versus potentially losing them completely.  Combined with back-office line-of-business applications linked through the Internet, the mobile workforce is able to conduct business efficiently and effectively without geographic limitation.  Wearable technologies aim to revolutionize how business is conducted, allowing for more efficient multitasking through wearable communication devices, powerful wearable computers, and biological microprocessors that use near-field communications to interact with the environment, connect to wireless Internet devices to retrieve data from corporate data warehouses, and then use the wearable computers to process and display that information for use and sharing.  Nanotech devices could enable video and audio communications through cybernetic-like implants, beaming high-quality, high-definition signals directly into the user's sensory receptors and providing an immersive experience that functions at the speed of thought.
These same nanotech devices, once outfitted with artificial-intelligence logic and processing, would become the next generation of executive or administrative assistant: able to recognize trends in a user's usage patterns, anticipate potential reactions to situations, and provide guidance on how to successfully navigate the landscape, all while delivering useful streams of relevant information that enable an intelligent and informed decision process.  When a worker is presented with all the relevant data pertaining to a situation and, with the assistance of intelligent nanotechnologies, is able to perceive all the potential outcomes of their reactions, they can make the best choice for that situation, resulting in improved satisfaction, a higher probability of positive outcomes, and in turn increased revenues.

Gap Analysis & Migration Strategy
            In order for wearable technologies to successfully transition into the enterprise on a widespread basis, a few key gaps need to be addressed as this emerging technology evolves.  The first gap is the technologies themselves, as a majority of these capabilities are either in their early stages of development or only partially implemented.  As mentioned, wearable technologies are currently used as accessories to their larger host devices, integrating key functionality such as voice-to-text/text-to-voice and the capture of health data for monitoring purposes into the accessory, without requiring large or complex devices that may or may not be portable themselves.  In order for wearable devices to successfully evolve into independent computing systems, circuit, transistor, and storage technologies must continue to miniaturize toward nano-scale form factors.  With the recent development of carbon nanotubes and memristors, these microscopic form factors are becoming reality.  A group out of Australia has successfully created a nano-transistor from a single phosphorus atom, whose atomic radius is 0.098 nanometers.  This is a direct step toward nano-transistors that, once the research is complete, should result in sub-nano-scale computing methods and lead to quantum computing.  This would establish the foundation for very powerful systems that could be easily embedded into biological hosts to enable the advanced collaboration and communication methods necessary to conduct business in the next generation.  The next gap to analyze is embedding these systems into biological hosts, taking advantage of the bioelectricity generated to maintain continuous power states, as well as neurologically connecting said bio-hosts to these nano systems to provide cohesive functionality that does not impede either entity.
Currently no such solutions exist; however, the neural and material sciences have made advances, creating technologies that can mimic such environments and thus lead to an understanding of how to interface with them directly through biological and chemical processes.
            The Federal Communications Commission (n.d.) website states that the agency regulates interstate and international communications by radio, television, wire, satellite, and cable in all 50 states, the District of Columbia, and U.S. territories.  It is the primary authority for communications law, regulation, and technological innovation.  As such, it would be responsible for mandating policy on how to manage the integration of nano devices into mainstream use and where their use is inappropriate.  As the industry evolves and technologies continue to shrink, the FCC will be at the forefront of determining how and when the use of these technologies is ultimately appropriate for public integration once the core infrastructure is in place.  Currently, there are no specific laws dictating how or when these devices can be used, only that they cannot actively interfere with other electronics and must accept interference from other electronics, as is the standard mandate for all consumer electronics per the stamp shown on each device approved by the FCC for use.
            There has been a shift happening over the past couple of decades that the author has been tracking along with some peers.  As technology advances and devices continue to shrink in size while increasing in power, users are following suit by moving from clunky desktop systems to laptops, ultrabooks, tablets, smartphones, and now wearables.  Given how capable commercially available wearable computing is today, combined with the research being done in nanotechnology, artificial intelligence, cloud-based service offerings, and vast storage facilities, the future of wearable computers is already well in hand, with more innovations coming as we begin to fully understand how to manipulate and integrate technologies such as nanotubes and nanowires, allowing us to take computing capabilities down to microscopic levels.  The potential is nearly limitless, with the theoretical ability to build nanomachines that are self-sufficient, self-reliant, and highly aware.  Wearable microprocessors embedded in a person's skin could be the hub that enables personal interactions with our various devices and daily systems, as well as with medical facilities, civil and government facilities, and large-scale advertisements, providing a highly customized and personal experience not previously possible.  There are privacy and security considerations to be understood, which will require regulations to protect the providers of these devices as much as their users.  Those can only be realized as these technologies continue to be developed and infiltrate the professional realm.

98 Pm in nm. (2014). Retrieved from
Anthony, S. (2013). Killing silicon: Inside IBM's carbon nanotube computer chip lab. Retrieved from
Federal Communications Commission. (n.d.). What we do. Retrieved from
Size of phosphorus in several environments. (n.d.). Retrieved from
Smith, D. (2012). Nano-transistor breakthrough to offer billion times faster computer. Retrieved from
University of Phoenix. (2014). Week three supporting activity: Effect of emerging technologies on services. Retrieved from University of Phoenix, CMGT557 - Emerging Technologies and Issues website.