Monday, July 3, 2017

Time to Air Gap

In a world where activity is streamed around the clock, in real time, across the globe and beyond at trillions of bytes per second, the biggest question in privacy is how to achieve anonymity when almost nothing is secret. There is still a way to get 99% of the way there. It's simpler than you think, but an effort nonetheless.

Some background. Everything we do in digital form is cataloged and stored in a vast array of databases and servers across an enormous number of touch points, then synchronized across a thousand other servers for redundancy and caching, then backed up to dozens more, each with its own redundant backups. Anything you put online...any application you use...any "terms of service" you agree to...any text or media you post...remains online forever. With the right tools and search terms, anything can be searched for, spied on, or downloaded in an instant. It's been this way for decades, and it will continue to be that way for centuries to come, as long as the planet remains this connected and there is electricity.

Some discussion. Cyber attacks are a constant. Increasingly, we should take as a starting point that cybersecurity compromises are the third certainty in life. The cyber world is constantly at war with itself. Governments hacking governments. Corporations hacking corporations. Governments hacking corporations. Hackers hacking governments and corporations. Hackers hacking hackers. Governments and corporations hacking hackers. And then there's everyone else. Generally oblivious. Privacy is a luxury, which we give up willingly every single second of every day. The emergence of intelligent systems, artificial neural networks, and deep learning algorithms only accelerates this. They take, store, and learn from every bit of data we leave as breadcrumbs. Artificial intelligence is here, and it is learning. From us. And we're letting it. Give it enough processing power, and it may become self-aware. Quantum computers could make that very real, very soon.

Some perspective. Having lived through the evolution of modern computing, including the Internet, all of this is absolutely fucking amazing, and a geek's ultimate wet dream. A demonstration of true human genius, ingenuity, and progress (not as far as we should be, but progress nonetheless). Highly impressive in the vastness of its brilliance and its simple complexity. I love using it, and learning about it, and protecting it. All of it, if I am completely honest, scares the living shit out of me. There is too much. It has become frightening. AI is now making decisions and inferences faster than humans, and has even been seen generating its own programming code. So the concept of air gapping entered my mind as a way to keep safer than I already am. Most cannot see the signs, or do not want to admit they exist, but I am of the firm belief that World War III has been underway for some time, and we need to protect ourselves, especially our digital lives, as I feel they are the most vulnerable to compromise. Stay with me, it's all relevant.

It has been discussed for decades that the next major global war would be fought half online, and half in the real world. The evidence is all there, and I do not believe it to be simple coincidence. Global newz outlets, small town newz papers, radio ztations, and zocial media have been propagating images of this war. Pick a topic...WMD's, genocide, terrorism, ransomware, deep web market hackz and seizures, arrests of crackers and phreakz, data breaches, RFID implants, cyber surveillance initiatives, counter cyber terrorism, weapons trafficking, the unavailability of bullets to the public, gun control politics, powerful botnetz, election hacks, political hackz, hardened/weaponized computer systems...I hope you get the point.

Back full circle. Traditionally, air gapping a system means it has no network interface cards or external drives with which to access or extract the data contained within. An attacker cannot get close enough to implant a listening device that reads vibrations or thermal changes given off by the system's internal hardware and converts them into bits representing the data being actively accessed (such as login credentials, encryption/decryption key exchanges, data manipulations, etc.). The only way to extract the data contained within is by sitting at the console and physically removing the locked and encrypted drives (assuming there is no SD card slot or similar port). Then, if you can pull off the impossible (which includes getting the data off campus), you would need supercomputer power to decrypt the contents of the drive, which would still take far longer than a lifetime to break by brute force (unless you get lucky). There is also the possibility that once you decrypt the data, it could transmit its location to its owner, meaning you too would need an air gapped system just to handle the exfiltrated data. Then comes the question of what you do with said data. Yet another catch-22.

The NSA, CIA, FBI, DEA, DHS, militaries, every government, and super corporations maintain their most secret data on air gapped systems. Physical access to these systems is extremely limited and highly controlled. It's considered the safest digital platform because the system isn't connected to anything but a power cable, and thus, in theory, cannot be hacked. A true digital safe, as it were. We know anything can be hacked; it just takes time. As hackers, we count on human error and complacency, which makes even air gapping a 99% solution, and the best we've got. Now, take this concept and apply it to a human life. It's far simpler, and also 99%. The anomaly is human nature.
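To put "it just takes time" in perspective, here is a back-of-the-envelope sketch in Python. The guess rate is an assumption chosen purely for illustration, not a measured figure:

```python
# Back-of-the-envelope brute-force estimate for a 256-bit symmetric key.
# The guess rate is an illustrative assumption (roughly a very large
# supercomputing cluster); real attackers do not brute-force, they phish.
KEYSPACE = 2 ** 256
GUESSES_PER_SECOND = 10 ** 18
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

# On average, an exhaustive search covers half the keyspace before a hit.
avg_years = (KEYSPACE // 2) // GUESSES_PER_SECOND // SECONDS_PER_YEAR
print(f"Average brute-force time: about 10^{len(str(avg_years)) - 1} years")
```

Even at this absurd rate, the average search runs to roughly 10^51 years, which is exactly why real-world attacks target human error and implementation flaws rather than the math.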

Based on my research, here is what I have learned about how to go 99% off grid digitally. While I do not yet practice everything I note here, I am closer than even those who know me best realize.

1) Get off the internet, period. No social media, no surfing the clearnet, no online purchases, no clearnet email accounts. If it becomes absolutely necessary to access the Internet for a specific purpose, there are completely anonymous ways to do so: secure systems like Tails over Tor, for example, with cryptocurrency and ghost mailboxes. Avoid Google at all costs. Use the Tor Browser, responsibly. But generally, just leave it all. Stop posting immediately, delete your accounts, and never go back.
2) Get rid of your smartphones, tablets, Windows and Apple computers, smart devices (TVs and refrigerators included), robot vacuums, etc. Need a cell phone? Buy a prepaid flip phone, and change it (and the number) every month (aka burners). Every phone can eventually be traced and tracked. Still need a computer? Learn Linux, learn how to secure it, and practice far smarter browsing habits (use the Tor Browser), if you browse at all. Keep in touch with world events, anonymously, and continuously hone your skills.
3) Always use cash or cryptocurrency, for everything. If you have to make an online purchase, use cryptocurrency, the deep web (local markets only, don't buy overseas, and be very careful), and have it shipped somewhere that is not your home, like a post office box, a business, or an associate's location, under a false name. By the way, there are now ATMs where you can convert cash to BTC, and vice versa. Look it up using a privacy-respecting search engine.
4) Drive an older car that does not have a computer in it, or at least has all-analog systems. Yes, cars are also being hacked, remotely. Keep it clean and running well though; you don't want to draw undue attention. Walk or take public transportation when you can, avoiding direct face contact with cameras. When you do go places, change your entry/exit routes regularly...avoid habitual patterns, unless necessary to remain hidden in plain sight (like going to work, or getting groceries).
5) If you must maintain a digital/public footprint, keep it immaculate and purposeful. That means a clean record and a "normal" looking life, so as to not draw undue attention. Keep it super minimal and protected, even fake some details if you wish, but it has to be believable. Your outward personality must seem conforming, friendly, and genuine. When people search for you online, they need to find only what you want them to find. Purposeful is the key word here. To keep your accounts secure, use a diceware word list to generate passphrases of 7 to 10 or more words (as the host allows), and rotate passwords on a schedule.
6) Second most important after getting offline, and the best final piece of advice: live simply and minimally. Only get what you literally need to live comfortably and look "normal". The trick to hiding in plain sight is being distant enough that people respect your privacy, but involved enough that they believe you to be a "normal, nice guy/gal". Avoid run-ins with the law and reporters. Do not have public arguments. Remain intelligent, articulate, empathetic, determined, and most of all inquisitive. Question everything, be aware of everything.
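As a concrete sketch of the passphrase advice in item 5, here is a minimal diceware-style generator in Python. The tiny word list is a placeholder I made up for illustration; a real setup would use a published diceware list of 7,776 words, and must use `secrets` (not `random`) for cryptographic randomness:

```python
import math
import secrets

# Placeholder word list for illustration only; a real diceware list has
# 7,776 words, so each word then adds ~12.9 bits of entropy.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "granite",
         "velvet", "copper", "lantern", "meadow", "quartz", "thimble"]

def passphrase(n_words=8, wordlist=WORDS):
    """Pick n_words uniformly at random with a CSPRNG (secrets, not random)."""
    return " ".join(secrets.choice(wordlist) for _ in range(n_words))

phrase = passphrase()
bits = 8 * math.log2(len(WORDS))  # entropy even if the attacker knows the list
print(phrase)
print(f"~{bits:.1f} bits of entropy with this toy list")
```

The entropy calculation assumes the attacker has the word list; the strength comes entirely from the random selection, not from keeping the list secret.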

If you can literally get out of Dodge and move to the mountains in the middle of nowhere, or something like that, you get even closer to 99%. If you are not online, there is nothing to take or attack. Here again, human nature is the anomaly.

You can be connected, yet a ghost. You can see the world, without a face. You can reach out, without being reachable. The fewer connections you maintain, the better. I am committed. How far are you willing to go?


This blog is only to express the opinions of the creator. Inline tags above link to external sites to further your understanding of current methods and/or technologies in use, or to clarify the meaning of certain technical terms. Any copyrighted or trademarked terms or abbreviations are used for educational purposes and remain the sole property of their respective owners.


Monday, June 8, 2015

Competitive Forces that Shape IT Strategy in Business

Competitive Forces

The introduction of information technology (IT) systems has changed how companies conduct business, and also how they compete in their respective markets. There are a number of risks and advantages to implementing an IT system, which can be managed with the correct mix of technologies as an integrated platform. The purpose of this paper is to review the competitive forces that shape IT strategy in business.

IT Risk to Competitive Advantage

One of the primary risks to a company’s competitive advantage is systems availability. The computer has become a key tool in conducting business, which means systems must be reliable and provide the resources a person needs to meet or exceed the expectations of their role. From an IT perspective, system failure should be proactively monitored across the enterprise so that downtime is minimized in nearly all potential scenarios. The loss of revenue from being offline can be many multiples higher for companies that provide 24/7 services to their clients, where revenues are calculated by the minute.

Another risk to competitive advantage is the disclosure of the sensitive or proprietary data that is the source of the company’s advantage. A sales agency’s value to a manufacturer, for example, derives from its industry contacts and distribution network; its contact database is therefore its most valuable asset. One risk is espionage: an insider could provide these details to a competitor, or to a manufacturer looking to disrupt the market by selling online or via direct sales. Another risk is the disclosure of sensitive data that represents customers’ private information, including contact information and financial transaction data. For example, the healthcare industry has HIPAA regulations which stipulate what data is to be protected, how it is protected, and under what circumstances it can be disseminated. These regulations are in place to protect the consumer and stabilize competition between market providers.

A third area where IT represents a risk to a company’s competitive advantage is ineffective IT governance. According to Gartner (2013), “IT governance is defined as the processes that ensure the effective and efficient use of IT in enabling an organization to achieve its goals.” Throughout the past 30 years, companies have struggled to define the role of IT as it relates to business goals; many still do. IT was seen as a necessary evil, a means to an end, or another tool to automate certain tasks within a company, but not a way to achieve strategic advantage over a competitor or a method to dominate a market. As business goals evolve, formal IT governance ensures that resource allocations remain dynamic and scalable enough to meet changing needs.

A fourth risk area is slow adoption, in which a company fails to respond to the challenges presented by direct competitors by updating or upgrading its technological capabilities. Many companies across all industries are slow to adopt new technologies, even those offering clear advantages over current systems, due to user resistance to change or the excessive cost of redesigning proprietary applications to be compatible with modern systems. By not adopting new technologies, capabilities become limited, workers become unable to respond to customer demands in a timely manner, and systems can become overwhelmed to the point of failure.

A final area where IT represents a risk to a company’s competitive advantage is cyber security. Any deficiency in a network’s security model presents a vulnerability that, if attacked with the correct vector, could mean a complete compromise of a company. The single most important aspect of any cyber security plan should be user education. There are a number of hardware and software solutions available to centralize and manage cyber security across an enterprise, providing comprehensive methods to thwart a direct attack from an outside entity, but they can only do so much. Half of all data breaches occur through phishing attacks, “in which unsuspecting users are tricked into downloading malware or handing over personal and business information” (IT Governance Ltd, 2015). These usually come in the form of a legitimate-looking email; once the user initiates the connection, the system becomes infected and performs whatever the installed malware was programmed to do. The result of a breach can be catastrophic to an organization because of the importance of the data lost, and potentially the legal ramifications of lawsuits over divulged protected data, whether inadvertent or deliberate.

IT Support of Competitive Advantage

A clear competitive advantage provided by IT is systems availability. With mission-critical systems, redundancy is designed into the system model in an effort to eliminate the risk of downtime and approach 100% availability. While the expense of such a design can reduce net profits, it becomes a strategic advantage because the company can provide 24/7 services to its customers, regardless of geographic location. Many companies are moving their customer-facing systems into cloud services to provide exactly that: availability. From online shopping, to financial institutions, to educational facilities, many companies must provide a 24/7 model to meet customer demand, and IT is the only way to ensure continuity and consistency across all communication methods.
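The value of availability is easy to quantify. As a minimal sketch (plain arithmetic, not any vendor's SLA terms), converting an uptime percentage into allowed downtime per year shows why each extra "nine" justifies the cost of redundancy:

```python
# Convert an availability percentage into allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def downtime_minutes_per_year(availability_pct):
    """Minutes of downtime per year permitted at a given uptime percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    minutes = downtime_minutes_per_year(pct)
    print(f"{pct}% uptime allows about {minutes:,.1f} minutes down per year")
```

At 99% uptime a service may be dark for roughly 5,256 minutes (about 3.7 days) a year; at "four nines" that budget shrinks to under an hour, which for a by-the-minute revenue model is the difference between an inconvenience and a lost contract.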

IT provides a unique benefit for protecting sensitive and proprietary data in that the data can be encrypted to ensure only authorized users can gain access. Some regulations, such as HIPAA and PCI-DSS, stipulate not only data encryption but also low-level, whole-drive encryption using strong algorithms such as AES-256. Encrypting data, and data communication channels, helps ensure that no outside party can view the information contained in these files.
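Deriving a 256-bit key from a passphrase is the usual first step before any AES-256 encryption of stored data. A minimal sketch using only the Python standard library (the passphrase and iteration count are illustrative; iteration counts should follow current published guidance):

```python
import hashlib
import os

def derive_key(passphrase: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Derive a 32-byte (256-bit) key from a passphrase with PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations,
                               dklen=32)

salt = os.urandom(16)  # random per-use salt, stored alongside the ciphertext
key = derive_key("correct horse battery staple", salt)
print(len(key) * 8, "bit key")  # → 256 bit key
```

The salt is not secret; its job is to make precomputed-table attacks useless, while the high iteration count slows down offline guessing of weak passphrases.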

Proper implementation of IT governance supports a company’s competitive advantage because it ensures that all processes provide effective and efficient use of company resources, with IT as the common thread. Over the past few years, as IT has proven its worth to companies looking to remain relevant in an ever-changing consumer model, organizations have come to realize how important it is to bring IT goals into alignment with business goals. As an organization grows to meet market conditions, aligning these two areas becomes essential to ensuring stability throughout the process, and provides the foundation necessary for continuity as the company evolves.

A fourth way that IT supports a company’s competitive advantage is by enabling the company to adapt quickly to changing markets. When implemented in an elastic model, such as the facilities provided by cloud solutions, companies can respond almost instantly to spikes in consumer demand with a few clicks of a mouse. By leveraging this model, companies can improve efficiency, improve worker output, and lower operating costs, thereby increasing revenues and profits. A number of companies have adopted the agile model of development for their products, where concepts move quickly from the drawing board, to prototype, to final product in a short time frame, with issues fixed as they are found in production. IT makes this possible because the cloud model of scalability provides these resources dynamically, on demand.

A final way that IT supports an organization’s competitive advantage is through the implementation of a cohesive user education program and an information security management system: a comprehensive approach to managing cyber security risks that accounts not only for people, but also for processes and technology. Security should be built into every process a user follows to manipulate data in an information system. Once the physical perimeter of an infrastructure is secured, users need to be trained to identify phishing attacks and social engineering tactics so they become a weapon against these attack vectors rather than the weak link. Part of that training should cover what cyber security systems are in place, how they protect users and corporate data, and why it is important for users to know this.

IT Risk Scenario: System Availability

In the course of the author’s career, there was an instance where a major system outage cost the company a multi-million-dollar opportunity. The root cause of the outage was later found to be a misconfigured operating system update, provided by the software manufacturer as a critical patch for a well-known vulnerability. The misconfigured update caused every service hosted on the domain servers to reject all queries from all systems. Since the update was automatically deployed to all servers in the forest, failover switching was not an option. It took over six hours to troubleshoot and eventually rebuild the primary server and supporting services to bring the network back online. In that time frame, a bid deadline expired for a major project and the author’s company was removed from consideration. Since it was one of only two companies that serviced this specific product group, from different factories, the contract was awarded to the competition. It represented a $20 million opportunity that spanned three years across five large developments. Had the company been able to submit its bid, it would have saved the client 8% in costs and over a month in lead times.

IT Advantage Scenario: Data Privacy and Protection

Data security has become a major consideration for companies of all sizes, and for certain market segments it is a federal edict. Prior to the introduction of HIPAA regulations, the privacy of people’s health records was being mishandled. Data was stored in proprietary formats, which increased administrative costs, and was shared with nearly anyone who had a seemingly legitimate need for it, whether for patient treatment or insurance carrier marketing purposes. Once public outcry reached critical mass, the Health Insurance Portability and Accountability Act (HIPAA) of 1996 was created. HIPAA protects the confidentiality and security of healthcare information, and helps the healthcare industry control administrative costs (TN Department of Health, n.d.).


The implementation of IT systems comes with many risks and rewards for any entity, whether a company or a person. The main purpose of IT is to make a company more effective and efficient across all operational parameters. Proper management of the risks and advantages provided by an integrated IT platform can ensure that a business meets the demands of its customers while remaining in a position to evolve as rapidly as its market does. Once systems and software are set up, security models implemented, and data secured, user education becomes the key component in ensuring that IT provides a secure platform for the improved efficiency and increased effectiveness expected across all job roles.

Gartner. (2013). IT Governance. Retrieved from

IT Governance Ltd. (2015). Federal IT professionals: insiders the greatest cybersecurity threat. Retrieved from

TN Department of Health. (n.d.). HIPAA: Health Insurance Portability and Accountability Act. Retrieved from

Monday, May 25, 2015

Security Systems Development Life Cycle (SecSDLC)

Security Systems Development Life Cycle

When designing information systems, there are logical phases which must be considered in order to achieve maximum efficiency and effectiveness throughout the organization, in every role. Throughout the six phases of the systems development life cycle (SDLC), it is imperative to ensure that security is integrated into each aspect of the platform. When building a security project, the same phases of the SDLC can be adapted to suit. The security systems development life cycle (SecSDLC) shares similarities with the SDLC, but the intent and activities are different. The purpose of this paper is to review and explain the phases of the SecSDLC, discuss its differences from the SDLC, and note applicable certifications.


Investigation

In this phase, the project scope and goals are defined by upper management. They provide the process methodologies, expected outcomes, project goals, the budget, and any other relevant constraints. “Frequently, this phase begins with an enterprise information security policy (EISP), which outlines the implementation of a security program within the organization” (Whitman & Mattord, 2012, p. 26). Teams are organized, problems analyzed, and any additions to scope are defined, discussed, and integrated into the plan. The final stage is a feasibility study to determine if corporate resources are available to support the endeavor. The primary difference from the traditional SDLC is that management defines the project details; in the SDLC, the business problems to be solved are researched and developed by the project team.


Analysis

In this phase, the documents gathered in phase one are studied and a preliminary analysis of the existing security policies is conducted. At the same time, the current threat landscape is evaluated and documented, as are the controls in place to manage or mitigate these threats. Included at this stage is a review of legal considerations that must be integrated into the security plan. The modern global threat landscape is such that any business, small or large, is susceptible to attack from a third party, whether directly or indirectly. Certain industries have strict requirements on how data is to be stored, shared, or manipulated. Standards and regulations such as HIPAA, NIST guidance, PCI-DSS, ISO 27001, and others provide guidelines for an organization to be certified as compliant with established processes and methods, and some industries require these certifications for a company to conduct business in that sector. Understanding state legislation with regard to which computer activities are deemed illegal is vital to overall plan execution and sets the baseline for the types of security technologies that can be implemented across the enterprise. The risk assessment in this phase identifies, assesses, and evaluates the threats to the organization’s security and data. The final step in this phase is to document the findings and update the feasibility analysis. The main differences from the SDLC at this phase include the examination of legal issues, the relevant standards for the company’s market segment, the completion of a formal risk analysis, and the review of the threat landscape and its underlying controls; those aspects are unique to the SecSDLC. While considering security within every phase of the SDLC is vital, the focus and scope of security considerations are vastly different from the SecSDLC, which focuses solely on the security aspects of an information system.
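The risk assessment described in this phase is often reduced in practice to a simple likelihood-times-impact ranking. A minimal sketch, where the threats and the 1-to-5 scores are invented examples rather than anything from a real assessment:

```python
# Qualitative risk ranking: score = likelihood x impact, each rated 1-5.
# The threats and scores below are invented examples for illustration.
threats = [
    ("Phishing against staff",    5, 4),
    ("Ransomware via email",      4, 5),
    ("Insider data exfiltration", 2, 5),
    ("Physical server theft",     1, 4),
]

# Rank highest-risk threats first so remediation budget follows exposure.
ranked = sorted(threats, key=lambda t: t[1] * t[2], reverse=True)
for name, likelihood, impact in ranked:
    print(f"{name:28s} risk score = {likelihood * impact:2d}")
```

The output of a ranking like this feeds directly into the updated feasibility analysis: controls are proposed for the top of the list first, and documented residual risk is accepted for the bottom.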

Logical Design

With the SecSDLC, this phase creates and develops the blueprints for information security across the enterprise. Key policies are examined and implemented, and an incident response plan is generated to ensure business continuity, define what steps are taken when an attack occurs, and specify what is done to recover from a disastrous event. As in the SDLC, applications, data support, and structures are selected, with multiple solutions considered in the approach to managing threats. Unique to the SecSDLC is the detail involved in securing the SDLC’s core concepts: analyzing the system security environment and functional security requirements; obtaining assurance that the security system developed will perform as expected; weighing cost considerations for hardware, software, personnel, and training; documenting security controls that are planned or in place; developing security controls; and defining use case tests and test evaluation methods. The concepts and best practices detailed by NIST can serve as a guide throughout this phase with regard to system hardening and the security measures expected to ensure end-to-end security across the enterprise. Project documents are again updated, and as with previous phases, the feasibility study is revisited to determine whether to continue the project and whether to outsource it.

Physical Design

The fourth phase of the SecSDLC evaluates the information security technologies needed to support the created blueprint and generates alternative solutions, which dictate the final system design. The best of the technologies evaluated in the logical design phase are selected to support the solutions developed, whether custom built or off-the-shelf. A key component of this phase is developing a formal definition of what “success” means for the project implementation, against which it can be measured. The design of physical security measures to support the proposed system is also included at this phase. Project documents are updated and refined, and a feasibility study is conducted to ensure the organization is prepared for system implementation. The final stage of this phase is the presentation of the design to sponsors and stakeholders for review and final approval. If regulations such as HIPAA and/or PCI-DSS must be adhered to, the physical design of the infrastructure components must be modeled after their specific requirements with regard to the machines data is stored on, how those machines are physically accessed, and how the data stored on them is disseminated to authorized parties. This is unique to the SecSDLC. While data access control is a standard consideration of any information system, HIPAA, for example, provides specific requirements to maintain the privacy of patient records and ensure that data is shared only with specific authorized personnel within the medical industry. PCI-DSS covers how customer credit card details and identifiable data are stored, used, and accessed within a company’s network.


Implementation

This phase is similar to that of the SDLC. Selected solutions are purchased or developed, tested, implemented, and tested again. A penetration test may be conducted to ensure that the installed security measures perform as expected and that network resources are protected from third-party intrusion. Personnel issues are reevaluated, training and education programs conducted, and finally the complete package is presented to upper management for final sign-off. The SDLC differs in this phase in that the system developed is rolled out to users for daily use, whereas the SecSDLC is implemented on the back end by network administrators, as approved by upper management. Aside from accessibility issues repaired during testing, the user has no involvement in this phase of the SecSDLC.

Maintenance and Change

This is the most important phase of the SecSDLC because of the evolving threat landscape. Older threats evolve and mature into more dangerous ones, and new threats aim new attack vectors at system weaknesses. Active, constant monitoring, testing, modification, updating, and repair must be conducted on information security systems to keep pace with maturing and emerging threats. Zero-day vulnerabilities pose a significant threat to organizations at the cutting edge of their industry, and their security plan must be flexible enough to proactively prevent these threats while also integrating methods of recovery should an attack occur through an unknown vulnerability. This phase differs most from the SDLC in that the SDLC framework is not designed to anticipate a software attack that requires a degree of application reconstruction. “In information security, the battle for stable, reliable systems is a defensive one” (Whitman & Mattord, 2012, p. 29). The constant effort to repair damage and restore data against unseen attackers is never ending. Part of this phase includes the perpetual education of all personnel as new threats emerge and the security model is updated, because an educated user is a powerful security tool.


The purpose of the SecSDLC is to provide a framework for designing and implementing a secure information system paradigm. Since it is based on the SDLC, it shares many similarities in the processes and methods used to develop a comprehensive plan, but the intent and activities are different at each phase. While security is considered vital to every phase of the SDLC, the SecSDLC focuses solely on the implementation of technologies designed to protect an infrastructure from third-party intrusion, data corruption, and data theft. The SDLC develops the systems used within a business, while the SecSDLC develops the systems that protect those systems and an organization’s users.

References

Whitman, M. E., & Mattord, H. J. (2012). Principles of Information Security (4th ed.). Retrieved from The University of Phoenix eBook Collection.

Saturday, April 4, 2015

Premise Control and Environmental Factors

"Premise control is the systematic recognition and analysis of assumptions upon which a strategic plan is based, to determine if those assumptions remain valid in changing circumstances and in light of new information." Planning premises are primarily concerned with environmental and industry factors; I will focus on the environmental factors. These are the intangibles, the uncontrollable factors that pose great influence on the success or failure of a strategy. One of the biggest influences on corporate strategy in relation to IT is Web 3.0, the Internet of Things. The entire world is connected to everything at the speed of light through a complex mesh of connected devices and computers dubbed the Internet. This poses a massive challenge to the status quo of conducting business, in that consumers have a "get it now" mentality. With a few taps of a smartphone, or a few clicks of a mouse, consumers have access not only to information, but to nearly anything you can imagine, instantly. Satisfying this need for instant gratification poses a real threat to older, traditional methods of conducting business, while creating immense opportunities for the organizations that put together the right mix of goods and services. The younger generations are leading the demand curve, as well as sharing real-time feedback about the viability and quality of goods and services offered in the digital marketplace, and many companies are having trouble keeping pace. CTOs and business strategists now have to create ways to remain relevant in this new digital world. It is no longer as simple as creating an online presence, or e-commerce capabilities.
With social media taking center stage in real-time feedback, and given the sheer volume of information shared across these networks, companies need to take advantage of this marketing free-for-all and engage with their consumers, vendors, and service providers to form a cohesive, comprehensive ecosystem where all parties can interact, with the goal not only of improving the quality of the products being manufactured but also of providing instantaneous and enriching C2B and B2B interactions. The Internet lets anyone access any available information from anywhere they are geographically, and this has dictated a new business landscape. The strategy is no longer to appeal to a particular demographic or region, which used to be very successful for product placement and marketing plans. The strategy has shifted to being universally understandable across every demographic and every region, everywhere. Established best practices and presentation methods for digital information have created an expectation for all e-commerce providers, whether they be Walmart or the local eye doctor. Every consumer, old and young, demands that every company they interact with have not only an online presence, but also a mobile app; a multitude of customer service options - phone, email, IM, chat, Skype, FAQs, an online knowledge base, automated troubleshooting tools; and, with larger corporations like Walmart, the ability to get in-store service for products purchased online. Companies like Walmart are large enough and have enough resources to remain relevant in the ever-changing, fast-evolving Internet of Things world; however, many companies, even the largest ones, are struggling to remain relevant. The ups and downs of Dell are a perfect example.
Dell was late to the table in the mobile computing market. They were focused on consumer PCs, a brief and highly unsuccessful stint in consumer peripherals (printers, etc.), and then the enterprise, where they remain very strong with their server and managed services platforms. However, as the traditional PC became something of a niche product - old home PCs being repurposed as home servers or left stagnant while tablets, convertible laptops, and smart devices proliferated across the global market - Dell fell behind the 8-ball, as it were. Their PC sales dwindled, inventory accumulated, and users looked elsewhere for their technology needs, knocking Dell from the top of the market after multi-decade global dominance. The problem is they never had good premise control, so they did not reevaluate and either change or abandon their existing strategy. The ultimate failure, in my opinion, was their inability to recognize, or react to, the paradigm shift in how consumers and businesses conducted business. They stood firm that the PC would remain the dominant force in the market, but Apple catalyzed a massive movement with the release of the iPhone, pushing mobile convergence to consumers in an easy and pragmatic way. Apple created a device so simple to use, and so connected with everything, that everyone jumped on board, and quickly. Smaller companies and startups rode their coattails as the mobile marketplace was born. In a short time frame, the entire world was connected via mobile devices. Companies like Google and Samsung were quick to recognize the paradigm shift and started creating systems and services specifically designed for mobile platforms. Fast forward to the modern day and the lines are gone - there is no separate platform for desktops and mobiles; all systems now see the same information in the same way.
As web development technologies evolved, and modern mobile systems became more powerful than the servers of just a few years ago, what used to be a significant difference in computing power became insignificant on all counts. Premise control has become the name of the game, and the evaluation and management of environmental factors have become the lifeblood of any organization that wants to succeed in the modern business world.

Pearce, John A., and Richard B. Robinson. Strategic Management: Planning for Domestic & Global Competition. 13th ed. New York: McGraw-Hill/Irwin, 2013.

Thursday, November 13, 2014

Digital Security Discussion

Topic of discussion in my Enterprise Models class tonight: Digital Security.  Something I touched on earlier this year.

Our text postulated: "Increasingly opening up their networks and applications to customers, partners, and suppliers using an ever more diverse set of computing devices and networks, businesses can benefit from deploying the latest advances in security technologies."

My Professor said: "My thoughts on this are opposite: by opening up your network, you are inviting trouble and the more trouble you invite in, the more your data will be at risk. I understand what they are hinting at, with a cloud based network, the latest security technologies are always available, therefore, in theory, your data is more secure. Everyone needs to keep in mind though, that for every security patch developed, there are ways around them."

He went on to mention how viruses could affect the cloud as a whole and that companies and individuals moving to cloud-based platforms will become the next target for cyber attacks as the model continues to thrive.

Which is all relevant; however, I have a different perspective on digital security. My counterargument is that user education is the key. I have debated this topic - security and system users - many times over the years. Like most of us in the industry, I consider information security paramount. With the multiple terabytes of data we collect on our home systems, and even more in online interactions, keeping our data safe is really our last defense in privacy and security. As more companies and individuals place their corporate and personal data on cloud platforms, there is an uneasy sense of comfort for many people, including some seasoned pros. Companies like Google and Microsoft, which both have highly successful cloud models across the board, have taken responsibility for ensuring more than adequate digital and physical security in their data centers, which to an extent leaves it to assumption that the data and applications they warehouse and host are generally safe from intrusion. Users are the key to this whole ecosystem we have created, and this is where user education becomes critical. As most seasoned techies know, in the beginning systems and system operations were highly technical in nature, and only the most highly trained or technically creative individuals could operate and manipulate computer systems. Viruses were something you caught from kids at school or coworkers, not a daily blitz of digital infections numbering in the hundreds of millions, perpetually attacking in various forms. As systems got more complex in design but simpler in use, the user's technical ability level eventually became irrelevant. People ages 1 to 100, and even some very well trained animals, can all navigate systems and digital networks with very little effort.
Our systems now do all the work for us; users simply provide basic instructions and gentle manipulations, instead of hard-coding instruction sets and inventing programs on the fly as was the status quo in the 70's, 80's, and 90's. This idle-user mindset is the reason criminal hackers are still traversing firewalls and breaking encryption algorithms, and they are growing in numbers, as is evident from the number of new malware detections and infections recorded annually across all digital platforms and all continents. Educating users - on general best practices for system use and maintenance, how to identify potential scams, how to detect spoofing and malformed websites, what to avoid when reading emails or reviewing search results, and which security software is functionally the best, whether free or paid - is more critically important today than it has ever been. The problem is that the industry has created the lazy user by essentially conveying that security is a given. Microsoft even made a concerted effort by including the Windows Firewall and Windows Defender in its operating system by default, so that users had some protection out of the box. This was in response to a large number of users, who had been infected by one or more viruses, assuming they were protected because "it's from Microsoft, it has to be safe" - which was further from the truth than they could understand. As an educated user who knows how to secure systems and networks, I take it upon myself to ensure that users appreciate the need to set strong passwords when logging into various systems and services. I teach the importance of digital security and how to be more security conscious in everyday interactions.
I teach them how to correctly navigate Internet search results (avoiding "ads"), how to understand various security prompts and what they look like so they don't ignore them, what security solutions should be installed and how to identify them, and so on. This improved knowledge has created a culture of awareness among my users both at work and at home. I am regularly consulted by my peers on how to secure their own families' systems and how to explain it to their children. This creates a more intelligent user, and thereby a more intelligent user community at large, making the Internet a bit more secure. All of that said, it only takes a single missing character in source code to break a program and cause havoc, or a user inadvertently installing malware. Even the most seasoned users make these mistakes from time to time because we are all human, and as such we are fundamentally flawed, making no security solution 100% secure, because they are developed and used by humans. The best you can do is make every effort to educate and secure, and hope no one targets you - because if they want in badly enough, they will get in, and you won't be able to stop them.
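To make the spoofed-website lesson concrete, here is a minimal Python sketch of the kind of check I teach users to do in their heads. The trusted-domain list and the substring heuristic are illustrative assumptions on my part, not a complete anti-phishing solution; real detection would also need to handle homoglyphs, punycode, and certificate validation.

```python
from urllib.parse import urlparse

# Illustrative allow-list; a real deployment would use the organization's
# own known-good domains.
TRUSTED = {"walmart.com", "google.com", "microsoft.com"}

def looks_spoofed(url: str) -> bool:
    """Return True if the hostname imitates a trusted domain without being one."""
    host = (urlparse(url).hostname or "").lower().removeprefix("www.")
    if host in TRUSTED:
        return False  # exact match on a trusted domain: fine
    # A trusted brand name buried inside a longer, unrelated hostname is a
    # classic spoofing pattern, e.g. walmart.com.secure-login.example
    return any(t.split(".")[0] in host for t in TRUSTED)

print(looks_spoofed("http://walmart.com.secure-login.example/"))  # True
print(looks_spoofed("https://www.walmart.com/cart"))              # False
```

The point of the exercise is not the code itself but the habit: read the hostname right-to-left and ask whether the registrable domain is actually the one you trust.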


Wednesday, October 15, 2014

Artificial Intelligence and Decision Making

In a recent discussion in my Enterprise Models class, a classmate and I discussed the limitations of Artificial Intelligence theories and human emotions. Here is my response:

From the research I have been doing over the years on AI specifically, one of the biggest challenges is how to program emotions into a computer system. I think there are two primary problems currently. One, the main problem, is that modern computing technology processes things in a linear fashion: every time slice of a CPU cycle is occupied by either a 1 or a 0. There is no middle ground, there is no gray area. Everything is black or white, and follows a strict logic rule set. What is currently being done with systems like Watson and Google's web crawler software is using software to simulate scenarios and have the hardware crunch the data, while another part of the software provides the processing logic through algorithmic manipulation, thereby creating an intelligent system. Current intelligent systems are limited by the scope of their programming environment. Two, no programming language yet exists that can accurately tell a computer how to do what it needs to do in order to understand the logic behind a feeling. Most of the researchers I have found over the years say the technology isn't there yet, and I happen to agree. The possible solution to this quandary could be quantum computing.

With quantum computing, a qubit offers a system the ability to see a data stream in two states simultaneously. Each qubit is BOTH on and off (1 and 0) in the same "time slice" of a processing cycle, leveraging the power of superposition and entanglement. This allows the system to perform many operations on the same data stream. Neural networks simulate this through software, but over hardware that still processes data in a linear fashion. What we need is hardware that performs this natively, because it can work through the same data stream far faster than any software simulation. Enter quantum computing. D-Wave Systems is the current leader in true quantum computing with their current D-Wave quantum computer, but their system is highly specialized at the moment due to a lack of programming knowledge; while the system has amazing potential, as you will see from a couple of the links below, no one really truly understands how to use it. There are other links below with details on their system and methodology.
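The superposition idea can be made concrete with a toy classical simulation - which is exactly the software-over-linear-hardware approach described above, not real quantum hardware. A single qubit is just a normalized two-component complex vector; applying a Hadamard gate to |0⟩ produces equal measurement probabilities for 0 and 1 under the Born rule. This NumPy sketch is purely illustrative:

```python
import numpy as np

# Basis state |0> as a 2-component complex amplitude vector.
ZERO = np.array([1, 0], dtype=complex)

# Hadamard gate: rotates a definite bit into an equal superposition.
H = np.array([[1,  1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ZERO            # (|0> + |1>) / sqrt(2)
probs = np.abs(state) ** 2  # Born rule: probability of measuring 0 or 1

print(probs)                # [0.5 0.5] -- "both 1 and 0" until measured
```

Note the contrast: the classical simulation must store and multiply the full amplitude vector explicitly, and that vector doubles in size with every added qubit, which is precisely why native quantum hardware is attractive.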

The problem with quantum computing is that it requires a completely new way of perceiving computers, and a completely new way for users to interface with them, not to mention new hardware that performs in ways modern hardware cannot. That is what I see as the next wave of technological evolution. As transistors become subatomic with the help of graphene and carbon nanotubes, and technologies like memristors look to shatter our perceptions of information storage capacity and data throughput, quantum computers will become more commonplace across the landscape. The ability to create a true quantum system capable of processing complex emotional patterns is very real. Once we have a true quantum processor, and a true quantum operating system, we will have not only the power to process such patterns in fractions of a nanosecond but also the programming logic and syntax to leverage an intelligent system, and possibly create a sentient computer system, otherwise known as AI.

AI is a fascinating concept, and exactly why it will be the focus of my postgrad work. Quantum computing is a subject I have dreamed about and followed since I was a young boy, before computers were commonplace and technology was still considered a super luxury. Today technology is seen as a necessary commodity, but there are still concepts that have yet to be discovered or invented, and quantum computing is currently the field of interest. Once researchers and scientists figure it out, it will change the world.

D-Wave System References:

Quantum Computing References:

Monday, October 6, 2014

Technological Evolution - Quantum Computing, Memristors, and Nanotechnology

It is amazing how the evolution of technology changes perspectives on the future so quickly. With holographic interactive screens currently in use, memristors and atomic-level transistor technologies at our fingertips, and new developments in using light as a means to interact with systems or store system data, the reality of AI and systems like Jarvis can finally go from drawing-board concept to real-life prototype. For as long as I can remember, I have been talking about quantum computing and nanotechnology and how they are the future of systems and human interactions. As a younger teen, when I first started learning about quantum mechanics and ultra-microscopic hardware theories, I saw that the future of computer systems and computer-human interaction was going to be largely logic based and function faster than the speed of human thought. By marrying the concepts of quantum mechanics and advanced computer system theory, intelligent systems and true AI are highly viable and will be here within the current generation. As advances in nanotechnology take transistors to the subatomic level, and theories in quantum computing become reality, we are quickly going to see the industry change as the traditional system paradigm is shattered and a new evolution in technology is ushered in - I would call it the quantum age - where Schroedinger's cat can exist in both physical states without the concern of observation tainting the true reality of the object's existence. The potential gains from the quantum processors and quantum computing methods that scientists around the world are currently developing into physical models are, at the moment, limited only by manufactured hardware capacities.
As physical hardware capacities become perceived as unimportant to system planning schemes - due to advances like the memristor and photonics, including the newest nano-laser (see reference) - focus can be given to writing programs that take advantage of this new systems paradigm. What is going to take time is the change in mindset needed to use a quantum system, because it requires a completely new approach to hardware and software design. Modern systems process data in a linear manner, handling one bit after another based on the time-slice protocol programmed into the operating system and the CPU itself. Regardless of how many processors you throw at a system, each core still only processes one bit of data at any given time slice. The fastest supercomputer, China's Tianhe-2, can cycle through roughly 6.864×10^15 operations per second (3.12 million cores × 2.2 GHz each), but each core still only processes one bit at a time. Quantum systems do not function in this manner; they function in a far different reality, where a bit can be both a 1 and a 0 simultaneously within a single time slice - though quantum processors would not use a time-slice function; they would require something else yet to be defined. As scientists gain a better understanding of how to create a truly quantum computer system, and a quantum-capable operating system, we will see technology advance into arenas yet to be discovered. What we once called science fiction is quickly becoming scientific fact.
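The back-of-the-envelope figure above is easy to verify: core count times clock rate gives aggregate cycles per second, which is a rough ceiling rather than a measured benchmark result. A quick sketch (the variable names are mine):

```python
# Tianhe-2 figures as cited above.
cores = 3.12e6       # 3.12 million processor cores
clock_hz = 2.2e9     # 2.2 GHz per core

# Aggregate clock cycles per second across all cores: ~6.8 quadrillion.
cycles_per_second = cores * clock_hz
print(f"{cycles_per_second:.3e}")  # 6.864e+15
```

Each of those cycles is still a sequential step on its core, which is the linear-processing point the paragraph is making.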


References: (nano-laser) (Tianhe-2 details)