
Monday, July 22, 2024

#CrowdStrike Causes a Global Tech Outage - what happened, why, and (how) can it be prevented?

While the memes are amazingly good, and there's a lot of jest being spewed across the interwebs, this is a serious event with massive implications. So, in all seriousness, let's review the facts of the #CrowdStrike situation from 19-Jul-2024: 

As reported across global news outlets and the internets, a security company called CrowdStrike caused some chaos. There are cascading impacts across many industries. 

We are already seeing impacts:

- courier service delays (UPS, FedEx, DHL, etc.)
- flight delays/cancellations at airports
- small businesses closing for the day
- websites being inaccessible
- hospitals cancelling surgeries/treatments
- municipalities being closed
- government services being delayed

among many other cascading effects that could last days, or weeks.

While a major inconvenience, the bug was quickly resolved within CrowdStrike's system, so (as of publish date) the latest binaries are stable. Recovery will be slow and tedious, especially for larger networks, but the world will recover from this. 

What happened? As is being reported, a bug introduced during a routine update of their Falcon EDR software (endpoint protection software run by millions and millions of customers) caused the Windows equivalent of a kernel panic - manifesting as a "bugcheck" error (aka the Blue Screen Of Death, or #BSOD) on Windows machines. It does not affect #Apple or #Linux devices. Note: It is NOT a #Microsoft problem.

How can we prevent this? Short answer, WE as users can't. However, this isn't the first time a large global tech vendor has caused major outages across the globe, and it won't be the last. 

How can CrowdStrike, or any other company, prevent this? Simply put: adhere to SDLC methodologies, do adequate QA testing, and never do a full production rollout without fully testing in the field first. A common practice is to deploy to 10% of the network and see how systems and users respond (yes sysadmins, you can do targeted deployments even if you don't have network segmentation in place). If all goes well, push to 25% and test again, then 50% and test again, then do the full push (a rough sketch of this ring approach follows below). That way, when a problem does occur, it doesn't take out everything and it can be quickly fixed before a full production push. It's really IT Ops 101 - not that difficult.

This is also a good example of why you should back up your critical data frequently, whether to an external device or a cloud storage service (Google Drive, Dropbox, OneDrive, etc.). Do this personally as often as you feel is necessary. Most companies have policies governing backup types, schedules, and testing methodologies.
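To make that staged rollout concrete, here is a minimal Python sketch of the ring idea. The host names, deploy(), and fleet_is_healthy() functions are hypothetical placeholders (your management console or config-management tool would do the real work), so treat this as an illustration of the pattern, not anyone's actual deployment API.

```python
import random

# Hypothetical fleet of hostnames; in practice this comes from your inventory/CMDB.
fleet = [f"host-{n:04d}" for n in range(1, 1001)]
random.shuffle(fleet)  # avoid hitting one site or department first

# Progressive rings: 10% -> 25% -> 50% -> 100% of the fleet.
ring_fractions = [0.10, 0.25, 0.50, 1.00]

def deploy(hosts):
    """Placeholder: push the update to these hosts via your management tooling."""
    print(f"Deploying update to {len(hosts)} hosts...")

def fleet_is_healthy(hosts) -> bool:
    """Placeholder: check crash rates, boot loops, agent heartbeats, etc."""
    return True  # assume healthy for the sketch

deployed = 0
for fraction in ring_fractions:
    target = int(len(fleet) * fraction)
    ring = fleet[deployed:target]          # only the hosts not yet updated
    deploy(ring)
    deployed = target
    if not fleet_is_healthy(fleet[:deployed]):
        print("Problem detected -- halting rollout before it hits everyone.")
        break
else:
    print("Update rolled out to the full fleet.")
```

The point is simply that each ring acts as a tripwire: a bad update stops at 10% (or 25%) of the fleet instead of 100%.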

For my enterprise admins reading this, I hope you have a solid (and tested) backup methodology in place. Yes, you should test-restore your backups at least once per year, if not more often. If you can't restore the data, then what is the point of backing it up? 

So now the big question is, how does this issue get fixed? Well, it's a hands-on-machine fix (which means long days/nights and weekends for IT staffers for a bit). Since the devices are unable to boot, there's no back-of-house configuration that we admins can set to fix this. We literally have to put our hands on the device. The methodology is simple and only takes about 5 minutes per machine - but multiply that over hundreds, thousands, or even hundreds of thousands of devices and you can quickly see this is not a quick fix at scale. It is an even bigger nightmare for remote workers, who need to be walked through the fix over the phone, making it a 30-minute fix (at best). In those cases, from my perspective, it makes more sense to send them a replacement machine that is not bricked, then reset the troubled device once it is back in hand. Hopefully you have the inventory ready and waiting; otherwise you need to grab a company credit card and hit up every electronics store in your city. What a fucking PITA.
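To put rough numbers on "not a quick fix at scale" - the device counts and per-device times below are assumptions for illustration, not measurements from any real environment:

```python
# Illustrative only: estimate technician effort to remediate a fleet by hand,
# using the rough per-device times mentioned above (assumed, not measured).
onsite_devices, onsite_minutes = 10_000, 5      # hands-on fix
remote_devices, remote_minutes = 2_000, 30      # walked through over the phone

total_minutes = onsite_devices * onsite_minutes + remote_devices * remote_minutes
print(f"~{total_minutes / 60:,.0f} technician-hours")          # ~1,833 hours
print(f"~{total_minutes / 60 / 8:,.0f} eight-hour workdays")   # ~229 person-days
```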

CrowdStrike's official guidance can be found on their webpage here: https://www.crowdstrike.com/falcon-content-update-remediation-and-guidance-hub/ (external link). 

While all of this is happening, most of my peers and I agree that CrowdStrike is still a quality vendor offering quality security products and services. This was just a BIG fuckup from whoever pushes out their updates. Clearly, someone did not follow protocol.

As of this writing, CrowdStrike is the second-largest security vendor in the world, which is why the impact was as massive as it was...and the cascade effect isn't done yet. There will be more fallout from this, not to mention the legal cases that could be brought against them in the aftermath due to the downtime.

One of the biggest fallouts of this mess is phishing attacks - threat actors spinning up malicious domains claiming to fix the issue (they won't, they just want your money), and emails claiming to be able to fix the issue with "a click" (using a piggy-back technique to install a payload on your machine to do god knows what; oh, and steal your money too). Please do not fall for the phish. It won't end well for you, or your employer.

There is no "easy button" here peeps. Just a massive Pain In The Ass. 

#StayCyberSecure 
#BeCyberAware

Monday, June 8, 2015

Competitive Forces that Shape IT Strategy in Business



Competitive Forces

The introduction of information technology (IT) systems has changed how companies conduct business, and also how they compete in their respective markets. There are a number of risks and advantages to implementing an IT system, which can be managed with the correct mix of technologies as an integrated platform. The purpose of this paper is to review the competitive forces that shape IT strategy in business.

IT Risk to Competitive Advantage

One of the primary risks to a company’s competitive advantage is systems availability. The computer has become a key tool in the art of conducting business, which means it must be reliable and provide the resources necessary for a person to meet or exceed the expectations of their role. From an IT perspective, system failure is something that should be proactively monitored across the enterprise so that downtime is minimized in nearly all potential scenarios. The loss of revenue from being offline can be many times higher for companies that provide 24/7 services to their clients, where revenues are calculated by the minute.

Another risk to competitive advantage is the disclosure of the sensitive or proprietary data that is the source of the company’s advantage. A sales agency’s value to a manufacturer, for example, derives from its industry contacts and distribution network; therefore, its contact database becomes its most valuable asset. One risk is espionage: an insider could provide these details to a competitor, or to a manufacturer looking to cause disruption in the market by selling online or via direct sales. Another risk is the disclosure of sensitive data that represents customers’ private information, including contact information and financial transaction data. For example, the healthcare industry has HIPAA regulations which stipulate what data is to be protected, how it is protected, and under what circumstances it can be disseminated. These regulations are put in place to protect the consumer and stabilize competition between market providers.

A third area where IT represents a risk to a company’s competitive advantage is ineffective IT governance. According to Gartner (2013), “IT governance is defined as the processes that ensure the effective and efficient use of IT in enabling an organization to achieve its goals.” Throughout the past 30 years, companies have struggled to define the role of IT as it relates to business goals; many still do. IT was seen as a necessary evil, a means to an end, or another tool to automate certain tasks within a company, but not a way to achieve strategic advantage over a competitor or a method to dominate a market. As business goals evolve, formal IT governance will ensure that resource allocations remain dynamic and scalable to meet these changing needs.

A fourth risk area is slow adoption, in which a company does not respond to the challenges presented by direct competitors by updating or upgrading its technological capabilities. Many companies across all industries are slow to adopt new technologies, even when they offer clear advantages over current systems, due to user resistance to change or the excessive cost of redesigning proprietary applications to be compatible with modern systems. By not adopting new technologies, capabilities become limited, workers become unable to respond to customer demands in a timely manner, and systems can become overwhelmed to the point of failure.

A final area where IT represents a risk to a company’s competitive advantage is cyber security. Any deficiency in a network’s security model presents a vulnerability that, if attacked with the correct vector, could result in a complete compromise of the company. The single most important aspect of any cyber security plan should be user education. There are a number of hardware and software solutions available to centralize and manage cyber security across an enterprise, providing comprehensive methods to thwart a direct attack from an outside entity; however, they can only do so much. Half of all data breaches occur through phishing attacks, “in which unsuspecting users are tricked into downloading malware or handing over personal and business information” (IT Governance Ltd, 2015). These usually come in the form of a legitimate-looking email, and once the user initiates the connection, the system becomes infected and performs whatever it was programmed to do via the installed malware. The result of a breach can be catastrophic to an organization because of the value of the data lost, and potentially because of the legal ramifications of lawsuits for divulging protected data, whether inadvertently or on purpose.

IT Support of Competitive Advantage

A clear competitive advantage provided by IT is systems availability. With mission-critical systems, redundancy is designed into the system model in an effort to eliminate the risk of downtime and approach 100% availability. While the expense of such a design can reduce net profits, it becomes a strategic advantage because the company is able to provide 24/7 services to its customers regardless of geographic location. Many companies are moving their customer-facing systems into cloud services to provide just that: availability. From online shopping, to financial institutions, to educational facilities, many companies have to provide a 24/7 model in order to meet customer demand, and IT is the only way to ensure continuity and consistency across all communication methods.

IT provides a unique benefit for protecting sensitive and proprietary data in that the data can be encrypted to ensure only authorized users can gain access. Some regulations, such as HIPAA and PCI-DSS, stipulate not only encryption of data but also low-level, whole-drive encryption, using strong algorithms such as AES-256 with properly managed keys. Encrypting data, and the communication channels it travels over, helps ensure that no outside party can view the information contained in these data files.
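As an illustration of the encryption-at-rest idea, here is a minimal sketch using AES-256-GCM from the third-party Python `cryptography` package. The record contents are invented for the example, and a real regulated deployment would add the key management (KMS/HSM), rotation, and auditing that compliance frameworks also expect.

```python
# Minimal sketch: AES-256-GCM encryption of a sensitive record at rest.
# Requires the third-party "cryptography" package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in practice: kept in a KMS/HSM, never in code
aesgcm = AESGCM(key)

record = b'{"patient_id": "12345", "diagnosis": "example"}'  # hypothetical record
nonce = os.urandom(12)                      # must be unique per encryption operation
ciphertext = aesgcm.encrypt(nonce, record, None)

# Only a holder of the key (an authorized user or service) can recover the data.
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == record
```

The design point is that the algorithm (AES-256-GCM) is the easy part; the compliance burden lives in how the key is generated, stored, rotated, and audited.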

Proper implementation of IT governance can support a company’s competitive advantage because it ensures that all processes are designed for the effective and efficient use of company resources, with IT as the common thread. Over the past few years, as IT has proven its worth to companies looking to remain relevant in an ever-changing consumer landscape, organizations have come to realize how important it is to bring IT goals into alignment with business goals. As an organization grows to meet market conditions, it becomes essential to align these two areas to ensure stability throughout the process. This provides the foundation necessary to ensure continuity as the company evolves.

A fourth way that IT supports a company’s competitive advantage is by enabling the company to adapt quickly to changing markets. When implemented in an elastic model, such as the facilities provided by cloud solutions, companies can respond almost instantly to spikes in consumer demand with a few clicks of a mouse. By leveraging this model, companies can improve efficiencies, improve worker output, and lower operating costs, thereby increasing revenues and profits. A number of companies have adopted the agile model of development for their products, where concepts move quickly from the drawing board, to prototype, to final product, and issues are fixed as they are found in production. IT is what makes this possible, because the cloud model of scalability provides these resources dynamically, on demand.
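A toy sketch of that elastic idea follows; the thresholds, instance limits, and metric are invented for illustration, and real cloud platforms expose this as managed autoscaling policies rather than hand-rolled loops.

```python
# Toy illustration of an elastic scale-out/scale-in decision.
# Thresholds, capacities, and the metric source are assumptions for the sketch.

def decide_capacity(current_instances: int, cpu_utilization: float) -> int:
    """Return the desired instance count based on average CPU utilization."""
    if cpu_utilization > 0.75 and current_instances < 20:
        return current_instances + 2      # demand spike: add capacity
    if cpu_utilization < 0.25 and current_instances > 2:
        return current_instances - 1      # demand lull: shed cost
    return current_instances              # steady state

# Example: a traffic spike pushes average utilization to 90%.
print(decide_capacity(current_instances=4, cpu_utilization=0.90))  # -> 6
```

Cloud autoscaling services apply essentially this kind of rule continuously against live metrics, which is what makes the "few clicks" elasticity possible.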

A final way that IT supports an organization’s competitive advantage is through the implementation of a cohesive user education program and the implementation of an information security management system, which is a comprehensive approach to managing cyber security risks that takes into account not only people, but also processes, and technology. Security should be built into every process that any user takes to manipulate data in an information system. Once the physical perimeter of an infrastructure is also secured, users need to be trained to identify phishing attacks and social engineering tactics so they can become a weapon against these attack vectors rather than the weak link. Part of that training should include what cyber security systems are in place, how they protect users and corporate data, and why it is important for users to know this information.

IT Risk Scenario: System Availability

In the course of the author’s career, there was an instance where a major system outage resulted in the company losing a multi-million-dollar opportunity to a competitor. The root cause of the outage was later found to be a misconfigured operating system update, provided by the software manufacturer as a critical patch for a well-known vulnerability. This misconfigured update caused every service hosted on the domain servers to reject all queries from all systems. Since the update was automatically deployed to all servers in the forest, failover switching was not an option. It took over 6 hours to troubleshoot and eventually rebuild the primary server and supporting services to bring the network back online. In that time frame, a bid deadline expired for a major project and the author’s company was removed from consideration. Since it was one of only two companies that serviced this specific product group, from different factories, the contract was awarded to the competition. It represented a $20 million opportunity that spanned three years across five large developments. Had the company been able to submit its bid, it would have saved the client 8% in costs and over a month in lead times.

IT Advantage Scenario: Data Privacy and Protection

Data security has become a major consideration for companies of all sizes, and for certain market segments it is a federal edict. Prior to the introduction of HIPAA regulations, people’s health records were routinely mishandled. Data was stored in proprietary formats, which increased administrative costs, and was shared with nearly anyone who had a seemingly legitimate need for it, whether for patient treatment or insurance carrier marketing purposes. Once public outcry reached critical mass, the Health Insurance Portability and Accountability Act (HIPAA) of 1996 was created. HIPAA protects the confidentiality and security of healthcare information, and helps the healthcare industry control administrative costs (TN Department of Health, n.d.).

Conclusion

The implementation of IT systems comes with many risks and rewards for any entity, whether a company or a person. The main purpose of IT is to make a company more effective and efficient across all operational parameters. Proper management of the risks and advantages provided by an integrated IT platform can ensure that a business is able to meet the demands of its customers while being in a position to evolve as rapidly as its market does. Once systems and software are set up, security models implemented, and data secured, user education becomes the key component in ensuring that IT provides a secure platform for the improved efficiency and increased effectiveness expected across all job roles.





References
Gartner. (2013). IT Governance. Retrieved from http://www.gartner.com/it-glossary/it-governance

IT Governance Ltd. (2015). Federal IT professionals: insiders the greatest cybersecurity threat. Retrieved from http://www.itgovernanceusa.com/blog/federal-it-professionals-insiders-the-greatest-cybersecurity-threat/

TN Department of Health. (n.d.). HIPAA: Health Insurance Portability and Accountability Act. Retrieved from http://health.state.tn.us/hipaa/

Thursday, November 13, 2014

Digital Security Discussion

Topic of discussion in my Enterprise Models class tonight: Digital Security.  Something I touched on earlier this year.

Our text postulated: "Increasingly opening up their networks and applications to customers, partners, and suppliers using an ever more diverse set of computing devices and networks, businesses can benefit from deploying the latest advances in security technologies."

My Professor said: "My thoughts on this are opposite: by opening up your network, you are inviting trouble and the more trouble you invite in, the more your data will be at risk. I understand what they are hinting at, with a cloud based network, the latest security technologies are always available, therefore, in theory, your data is more secure. Everyone needs to keep in mind though, that for every security patch developed, there are ways around them."

He went on to mention how viruses could affect the cloud as a whole and that companies and individuals moving to cloud-based platforms will become the next target for cyber attacks as the model continues to thrive.

Which is all relevant; however, I have a different perspective on digital security. My counter-argument is that user education is the key. I have debated this topic - security and system users - many times over the years. Like most of us in the industry, I consider information security paramount. With the multiple terabytes of data we collect on our home systems, and even more in online interactions, keeping our data safe is really our last line of defense for privacy and security.

As more companies and individuals entrust their corporate and personal data to cloud platforms, there is an uneasy sense of comfort for many people, including some seasoned pros. Companies like Google and Microsoft, which both have highly successful cloud models across the board, have taken responsibility for ensuring more than adequate digital and physical security in their data centers, which to an extent leaves it to assumption that the data and applications they warehouse and host are generally safe from intrusion. But users are the key to this whole ecosystem we have created, and this is where user education becomes critical.

As most seasoned techies know, in the beginning systems and system operations were highly technical in nature, and only the most highly trained or technically creative individuals could initiate and manipulate computer systems. Viruses were something you caught from kids at school or coworkers, not a daily blitz of digital infections numbering in the hundreds of millions, perpetually attacking in various forms. As systems got more complex in design but simpler in use, the user's technical ability eventually became irrelevant. People ages 1 to 100, and even some very well trained animals, can all navigate systems and digital networks with very little effort. Our systems now do all the work for us; users simply need to provide basic instructions and gentle manipulations, instead of hard-coding instruction sets and inventing programs on the fly as was the status quo in the 70's, 80's, and 90's. This idle-user mindset is the reason why criminal hackers are still traversing firewalls and breaking encryption, and they are growing in numbers, as is evident from the number of new malware detections and infections quantified annually across all digital platforms and all continents.

Educating users on general best practices for system use and maintenance, how to identify potential scams, how to detect spoofed and malformed websites, what to avoid when reading emails or reviewing search results, and which security software is functionally the best (whether free or paid) is more important today than it has ever been. The problem is that the industry has created the lazy user by essentially conveying that security is a given. Microsoft even made a concerted effort by including the Windows Firewall and Windows Defender in its operating system by default, so that users had some protection out of the box. This was in response to a large number of users, who had been infected by one or more viruses, assuming they were protected because "it's from Microsoft, it has to be safe" - which was further from the truth than they could understand.

As an educated user who knows how to secure systems and networks, I take it upon myself to ensure that users appreciate why they have to set strong passwords when logging into various systems and services. I teach the importance of digital security and how to be more security conscious in everyday interactions.
I teach them how to correctly navigate Internet search results (avoiding "ads"), how to understand various security prompts and what they look like so they don't ignore them, what security solutions should be installed and how to identify them, etc. This improved knowledge has created a culture of awareness among my users both at work and at home. I am regularly consulted by my peers on how to secure their own families' systems and how to explain it to their children. This creates a more intelligent user, and thereby a more intelligent user community at large, making the Internet a bit more secure. All of that said, it only takes a single character missing from source code to give someone the ability to break the program and cause havoc, or a user inadvertently installing malware. Even the most seasoned users make these mistakes from time to time because we are all human, and as such we are fundamentally flawed - making no security solution 100% secure, because they are all developed and used by humans. The best you can do is make every effort to educate and secure, and hope no one targets you, because if they want to get in badly enough, they will get in and you won't be able to stop them.

~Geek

Sunday, October 5, 2014

Technological Evolution - Quantum Computing, Memristors, and Nanotechnology

It is amazing how the evolution of technology changes perspectives on the future so quickly. With holographic interactive screens currently in use, memristors and atomic-level transistor technologies at our fingertips, and new developments in using light as a means to interact with systems or store system data, the reality of AI and systems like Jarvis is finally able to go from drawing-board concept to real-life prototype.

For as long as I can remember, I have been talking about quantum computing and nanotechnology and how they are the future of systems and human interaction. As a younger teen, when I first started learning about quantum mechanics and ultra-microscopic hardware theories, I saw then that the future of computer systems and computer-human interaction was going to be largely logic based and function faster than the speed of human thought. By marrying the concepts of quantum mechanics and advanced computer system theory, intelligent systems and true AI become highly viable and will be here within the current generation. As advances in nanotechnology take transistors to the subatomic level, and theories in quantum computing become reality, we are quickly going to see the industry change as the traditional system paradigm is shattered and a new evolution in technology is ushered in - I would call it the quantum age - where Schroedinger's cat can exist in both physical states without the concern of observation tainting the true reality of the object's existence.

The potential gains from the quantum processors and quantum computing methods that scientists around the world are currently developing into physical models are, at the moment, limited only by manufactured hardware capacities. As physical hardware capacity becomes perceived as unimportant to system planning schemes - due to advances like the memristor and photonics, including the newest nano-laser (see reference) - the focus can be given to writing programs that take advantage of this new systems paradigm. What is going to take time is the change in mindset needed to use a quantum system, because it requires a completely new approach to hardware and software design.

Modern systems process data in a linear manner, processing one bit after another based on the time-slice protocol programmed into the operating system and the CPU itself. Regardless of how many processors you throw at a system, each one still only processes one definite bit of data at any given time slice. The fastest supercomputer, China's Tianhe-2, can process more than 6.8 quadrillion bits per second (3.12 million cores x 2.2GHz each = 6,864 x 10^12 operations per second), but each core still processes one bit at a time. Quantum systems do not function in this manner; they function in a far different reality, where a bit can be both a 1 and a 0 simultaneously within a single time slice - though quantum processors would not use a time-slice function, they would require something else yet to be defined. As scientists gain a better understanding of how to create truly quantum computer systems, and quantum-capable operating systems, we will see technology advance into arenas yet to be discovered. What we once called science fiction is quickly becoming scientific fact.
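For anyone who wants the "both a 1 and a 0" idea stated precisely, the standard textbook way to write a single qubit's state is as a superposition of the two classical basis states (general notation, not tied to any specific machine mentioned above):

```latex
% A single qubit is a superposition of the classical states |0> and |1>;
% |alpha|^2 and |beta|^2 are the probabilities of measuring 0 or 1.
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
\]
```

With n qubits the state is described by 2^n such amplitudes at once, which is where the hoped-for gains over one-bit-at-a-time classical processing come from.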

~Geek


References:
http://www.sciencealert.com.au/news/20143009-26256.html (nano-laser)
http://www.top500.org/system/177999 (Tianhe-2 details)

Monday, July 28, 2014

Wearable Technologies: An Academic Discussion


For the moment, wearables are extensions of our smartphones and phablets, offering a set amount of capability that is inferior to our smart devices but highly functional as currently designed. With innovations in miniaturization, improved power efficiency, and curved high-resolution glass screens, products like the various takes on the computerized watch, the Samsung Gear Fit bracelet and other exercise monitors, Google Glass wearable computers, systems embedded into clothing for various purposes (muscular development, health monitoring, etc.), and biological chips embedded under the skin that hold medical conditions and history details are all wearable technologies that are already changing how a lot of services are delivered.

Through improving miniaturization processes and manufacturing capabilities driven by more precise automation, these wearable technologies will cause market disruption for products that currently dominate the technology market, such as laptop computers and other larger portable computing devices. There has been a shift happening over the past couple of decades that I have been tracking along with some peers: as technology advances and devices continue to shrink in size while increasing in power, users follow suit by moving from clunky desktop systems, to laptops, to ultrabooks, to tablets, to smartphones, and now to wearables. The newest small, accessory-like devices contain more computing power and technical capability than the first dozen computers I owned growing up, combined.

With wearable computing as capable as what is commercially available today, combined with the research being done in nanotechnology and artificial intelligence, cloud-based service offerings, and vast storage facilities, the future of wearable computers is already well in hand, with more innovations coming as we begin to fully understand how to manipulate and integrate technologies such as nanotubes and nanowires to take computing capabilities down to microscopic levels. The potential is nearly limitless, with the theoretical ability to build nanomachines that are self-sufficient, self-reliant, and highly aware, and that could repair genetic defects within DNA that result in terminal illnesses, mental disabilities, or other debilitating genetic predispositions passed down through the generations. Wearable microprocessors embedded in a person's skin could be the hub that enables personal interactions with our various devices and daily systems - as well as medical facilities, civil and government facilities, and large-scale advertisements - to provide a highly customized and personal experience not previously possible.

Will there be privacy concerns? Of course. Will there be instances of data theft? Of course. Will that be a deterrent to mass adoption of such systems? No, I do not think so. This is no different from the current state of things with our smartphones, tablets, phablets, laptops, flash drives, and cloud-based facilities. With how convenient it becomes for dealing with usually stressful situations - going to the doctor, visiting a busy DMV, or paying for products quickly in a crowded store - people will begin to see the benefits of convenience far outweigh the potential invasions of privacy.

Being able to have your personal details quickly at hand, at whatever level of detail the user decides to include, provides the basis for process innovation through technological innovation. Wearables will become the primary outlet for the next generation of data sharing and digital interaction.




What do you think about the future of wearable technologies? ~Geek

Sunday, March 16, 2014

Hackers & Stuxnet: Education & Best Practices can Change Perspectives

Speaking of hackers in general, the video "Creating panic in Estonia" was well done.  It speaks to aspects of cyber security I have touched on personally with peers and users who are generally unaware of how dangerous the Internet can be, and who do not understand how they should be protecting themselves and, in turn, the rest of the user community at large.  The global dependency on the Internet as a necessary aspect of daily life can, and may, eventually lead to its demise.  It used to be seen as a tool - something that made research easier or enabled more efficient delivery of goods and services to customers.  The global Internet is far more interconnected than most people can comprehend.

We, as IT pros and field experts, find ourselves at a unique crossroads when it comes to the cyber realm.  On one hand, we are users who find entertainment and conduct business transactions through the Internet.  We keep in touch with friends, relatives, and associates, pay bills, and send gifts to people through the Internet.  We curiously investigate other perspectives on anything we can think of, reachable with a simple search string.  On another hand, we develop and/or service information systems and are responsible for ensuring that users are not their own worst enemy, and that executive stakeholders understand why expenses are necessary to ensure seamless operations while maintaining data security and integrity through digital interactions across the company or across the Internet.  Yet another proverbial fork is that of the hacker - not necessarily one who breaks into systems with malicious intent, which is primarily the kind of hacker (criminal) most people hear about, but the white hat hacker who, like the man who works for Kaspersky in Russia, looks to improve the quality and safety of the Internet.  In order to beat a hacker, one must be able to think like a hacker, and have intimately specific knowledge of software and systems, how they interconnect, and how users interact with them.  This whole-view perspective on digital communications is necessary in order to properly safeguard oneself, and also the global user community.

Education and repetitive reinforcement have been the successful combination for me in getting users at all levels to start investing in cyber security and take a different view of what they share on the web.  People contact me regularly to clean infections from their systems and networks.  Unfortunately, most of these users are of the mindset that "it should just work, and never give me problems, regardless of what I do," which has no substantive basis in reality.  In reality, it takes the combination of dozens, if not hundreds, of software applications to make a system function the way it does today.  Since no user situation is ever the lab-tested "ideal" situation, users must be educated not only on how to use their system, but on a basic understanding of how their use affects everyone connected to their system, and the subsequent systems the collective interacts with.  As experiences and exposure to different interactive scenarios accumulate, continued education is the key - it not only makes a better user, but a better system as a whole.

The Stuxnet event could have been avoided with the right engineer designing security protocols, establishing policies, and integrating hardware solutions designed specifically to deny access to unauthorized devices on a network.  Granted, Estonia may not have had access to the technology necessary to implement such a system, but those types of systems do exist.  This is the reason why companies like Symantec, McAfee, and Kaspersky have integrated a feature into their anti-everything software packages to instantly scan any removable device attached to a protected system.  Granted, Stuxnet did not yet have a known signature and thus could not be specifically scanned for, but those packages also have zero-day detection capabilities, meaning they have algorithms designed into the software to detect virus-like patterns and flag them as suspicious - which is how Stuxnet was ultimately found, through a zero-day detection algorithm.  While such detections are highly effective, they only work on systems that have this type of software installed.  Unfortunately, there are a large number of users who do not have protective software, let alone hardware solutions, installed on their systems and/or networks, which leaves them, and anyone they connect to, highly vulnerable.  Here again, education is the key - once people can be made to understand the risks involved, they will be willing to learn how to best safeguard themselves, which protects everyone else they interact with digitally.  It is like getting an immunization for a disease: if everyone gets the shot, then no one can transfer it or catch it from someone who is infected or has not been immunized.  The shot cures any carriers of the disease, prevents spreading of the disease to others, and does not allow the inoculated to become carriers again.  That is the philosophy of security software, and it is important for the same reasons.

~Geek

Reference: Video On Demand - http://digital.films.com/PortalPlaylists.aspx?aid=7967&xtid=50121&loid=182367

Sunday, March 9, 2014

A discussion about RFID


My class is talking about the viability of using RFID technology to subnet networks as an alternative to moving away from IPv4 because IPv6 is so complex, so I did some research.

The Internet is running out of public IP addresses to assign to websites and devices publicly connected to the Internet.  The current standard in use, IPv4, supports about 4.3 billion addresses (2^32), with more than 588 million of those set aside for private and other reserved ranges that are not routable on the public Internet (you see the private ranges in your office or home LAN). The next version of the IP protocol is IPv6, which supports 3.4 x 10^38 (340 undecillion, or 2^128) unique addresses in total, but only about 42 undecillion (the 2000::/3 block, roughly 2^125) have been made available at the moment by ICANN...that is enough for about 4,096 /48 allocations per person in the world, assuming 8 billion souls and /48 allocations by ISPs.
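To sanity-check those figures, here's a quick back-of-the-envelope calculation (the 8-billion population figure and the /48-per-site assumption follow the rednectar article referenced below):

```python
# Quick arithmetic behind the IPv4/IPv6 figures above.
WORLD_POPULATION = 8_000_000_000          # assumed round figure

ipv4_total = 2**32                        # ~4.29 billion addresses
ipv6_total = 2**128                       # ~3.4 x 10^38 addresses
global_unicast = 2**125                   # the 2000::/3 block (~42 undecillion)

# ISPs typically hand out /48 prefixes; 2000::/3 contains 2^(48-3) of them.
site_prefixes = 2**45
per_person = site_prefixes / WORLD_POPULATION

print(f"IPv4 total:              {ipv4_total:,}")
print(f"IPv6 total:              {ipv6_total:.2e}")
print(f"Allocatable (2000::/3):  {global_unicast:.2e}")
print(f"/48 prefixes per person: ~{per_person:,.0f}")
# ~4,398 with exactly 8 billion people; the "about 4,096" figure treats
# 8 billion as 2^33, giving 2^45 / 2^33 = 2^12 = 4,096.
```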

One of my classmates proposed that we use RFID chips embedded in a person to enable subnetting of devices they use; such as computers, tablets, smartphones, game systems, appliances, along with NFC (Near Field Communication) tags that use the RFID tag for systems access, even opening your front door and paying for groceries, etc. - using the RFID tag in the person as the host with the public IPv4 address, and the private IPv4 range for anything that connects through the tag.  All device communications go through the RFID chip embedded in the person to some access point or NAT device.

While in concept the idea seems logical and relevant to the future of interactivity - a relatively cheap technology to work with (averaging between $0.05 and $0.17 per RFID tag) with a broad range of possibilities - I do not think this is a viable alternative to moving to IPv6.  Two primary reasons stick out in my mind: 1) as far as I know or can find, RFID technology does not contain the logic processing, nor the physical hardware capacity, to negotiate the infrastructure methods necessary to make this plausible, especially when implanted in a human (biochemical considerations); and 2) there is a highly sensitive consideration of a person's privacy, as these RFID tags can store contact details, bank account information, and, with NFC sensors/readers installed, your activities.  Many would be fine with this level of invasiveness because it would simplify life's interactions across personal and professional spaces (and in all honesty it could); a lot more would not, because literally everything you do would be tracked and cataloged for whoever is linked to the system to extract and use for whatever they need the data for.

Assuming security is properly implemented, specific privacy issues are resolved, and the technology evolves to allow RFID tags to work like this - a hypothetical win-win for all sides - it would only delay the inevitable.  We would still run out of IPv4 addresses sooner rather than later.  Implanting tags in every person adds nearly 8 billion more devices needing unique addresses, which is more than the total capacity of IPv4.  Trying to update a system that is embedded (surgically inserted into a person) would cost everyone so much money that it would defeat its entire purpose.

Because we only have a couple billion IPv4 addresses left available world-wide, the move to IPv6 is already well under way.  While I like certain aspects of RFID use - retail sales tracking like Wal-Mart uses for automated restock orders, vehicle tracking used by trucking lines and manufacturers, processing payments through the RFID chip in a credit/debit card, etc., all of which make their respective processes more efficient and effective - it scares the crap out of me where a lot of authorities want to take the technology.  I won't go into the conspiracy-theory side of RFID (that is a lengthy conversation), but the implication that it will one day be used to track us and all our activities (yes, all of them) is real.  Here's the real question - given the expected future of RFID, would you get chipped, even if only for the purpose of instant access to your medical records and payment for goods and services?

~Geek

References
http://electronics.howstuffworks.com/gadgets/high-tech-gadgets/rfid.htm
http://rednectar.net/2012/05/24/just-how-many-ipv6-addresses-are-there-really/
http://spectrum.ieee.org/semiconductors/processors/the-plastic-processor
http://itknowledgeexchange.techtarget.com/whatis/ipv6-addresses-how-many-is-that-in-numbers/

This blog is only to express the opinions of the creator.  Inline tags above link to external sites to further your understanding of current methods and/or technologies in use, or to clarify meaning of certain technical terms.  Any copyrighted or trademarked terms or abbreviations are used for educational purposes and remain the sole property of their respective owners.


Sunday, October 21, 2012

What is an enterprise system and how can this design support testing processes?




An enterprise system is a compilation of separate but related modular wares integrated with a single cohesive database, with multiple interfaces, to achieve the business purpose of an organization across multiple departments, in an effort to consolidate separate legacy systems and improve overall efficiency. Enterprise systems are complex by nature, but with the right planning and proper execution, success can be reached and the benefits to an organization can be immense.

The enterprise system paradigm supports testing processes by offering a wide range of potential test cases for every aspect of an organization, effectively enabling developers and systems engineers to improve the system as a whole for the entire enterprise in specific ways. The concept behind enterprise systems is integration through modularity, empowering organizations to perform any function required and change the system on demand and/or based on local need.

For example, an accounting module that is functionally sound for US locations will calculate salaries differently than what is required for a European location. Currencies are different, taxes are different, pay scales are different, etc. As such, modified/localized versions of modules allow the organization to deploy localized versions of the accounting module while still integrating data into the central database (see the sketch below). This allows executives from any locale to gain insight into labor trends and costs across the enterprise and make more intelligent decisions on the direction of the company on a global scale. Test cases can be created to compare modules and sub-modules to see which are transferable to other locations of the organization, and then run in tandem to determine functionality. Since enterprise systems are sold by the module and then allow for some customization on the customer's part, the accounting module in general should be transferable to any locale with some minor modifications for local laws and practices, which saves the company money overall. It is far easier and less expensive to modify an existing module to allow for proper payroll calculations based on local laws, as an example, than it is to have the developer write a completely new module for each location that requires it and then figure out how to integrate that data without adding too much to the already complex central database. This testing model applies to any aspect of an enterprise system - inventory, human resources, manufacturing, etc. - just with different data sets.

Having a global infrastructure also allows administrators to tap into collective resources to evaluate and gain feedback on any proposed update/upgrade. Sometimes, asking workers simple questions can eliminate the need for many costly test cases, which, when performed in excess, can actually result in project failure from never really gaining momentum and being stuck in procedure or policy, as it were. Sometimes just listening to the users can be an administrator's best test case, as long as they are willing to hear what is said and then make sure that the executives buy into the concept.
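A toy sketch of that localized-module idea (class names, tax rates, and the record layout are invented for illustration; real ERP suites handle localization through configuration rather than custom code):

```python
# Toy illustration: one accounting-module interface, locale-specific payroll rules,
# all producing the same central record format so the data stays comparable.
from dataclasses import dataclass

@dataclass
class PayrollRecord:
    employee_id: str
    gross: float
    tax: float
    net: float
    locale: str

class AccountingModule:
    """Shared behaviour; subclasses override only what local law changes."""
    locale = "base"
    tax_rate = 0.0

    def run_payroll(self, employee_id: str, gross: float) -> PayrollRecord:
        tax = gross * self.tax_rate
        return PayrollRecord(employee_id, gross, tax, gross - tax, self.locale)

class USAccounting(AccountingModule):
    locale = "US"
    tax_rate = 0.24          # illustrative flat rate, not real tax law

class DEAccounting(AccountingModule):
    locale = "DE"
    tax_rate = 0.38          # illustrative flat rate, not real tax law

# Both localized modules feed identical records into the central database,
# so executives can compare labor costs across regions.
central_db = [
    USAccounting().run_payroll("u-001", 5000.00),
    DEAccounting().run_payroll("d-001", 5000.00),
]
for record in central_db:
    print(record)
```

Swapping the tax logic while keeping the record format fixed is what makes a localized module "transferable" and keeps the central database from growing more complex with every new location.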



Have a question? Have a comment? Comment below, let's start a dialog.






~Geek

Sunday, October 7, 2012

Integrating Information System Knowledge

Integrating Information Systems Knowledge within an Organization

The first discussion question for my last class went something like this:

"What is the best approach to integrating IS knowledge domain-specific and professional core competency needs with your organization? Why is this approach better than others?"

After reading my fellow classmates' responses, and pondering my own perspective on the question, here is what I came up with as my response:

The theme I see in my classmates' responses mirrors my own opinion on this topic - the best approach is to evaluate the IS resource to determine its usefulness, or kind of knowledge, and apply the resource based on the needs and goals of the organization.  As information systems have evolved into the complex and interconnected platforms of the modern era, more domain-specific competencies have emerged, resulting in a need for more specialization from IT professionals.

My family's business has been scaled down over the past two years due to the economic downturn; however, we continue to thrive because of my unique understanding of information systems and how they can help us be more efficient with fewer knowledge workers for our type of business.  The reason we remain successful is information systems.  Having grown up watching my father and grandfather build the business to what it was after over 60 years (dozens of workers, four locations around the globe, representing more than 100 major brand names for all aspects of construction and hospitality supply), being part of the downsizing effort was sobering.  Before the downsize, our core competencies were spread among departments, much like at any corporation, based on the level of skill and domain-specific knowledge.  Since I am the systems engineer, it is my responsibility to connect the dissimilar and unconnected workers to each other.  Understanding the core competencies of each position, and its importance to the whole of the company, coupled with my IT experience and systems prowess, allowed me to create a cohesive environment of wares and solutions that not only integrated all domain-specific workers into the larger whole, but also helped refine our core competencies into what they are today: a mostly interchangeable workforce that can be effective and efficient across all knowledge bases.

As another classmate noted, it would be unconventional to have an IT tech move into an accounting role, especially within a larger organization, but in small businesses workers need a mix of knowledge because more is required of fewer people.  Integrating information systems is the only way to accomplish this with any level of organization and efficiency.  Without properly planned and implemented information systems, and skilled workers to use them, most companies would fail before they got going because they could not remain competitive in the market.  Staying "lean and mean" in modern business is the only way most companies are still around after the harsh economic downturn, and the low cost of technology has enabled them to stay open, as well as paved the way for start-ups to reach a profitable state faster than ever.  The trick is to plan, plan, and plan.  An information system migration/integration takes careful planning, testing, redesigning, retesting, and finally implementation and maintenance.  Trying to do the same without a plan almost always results in catastrophic failure.

...damn I love this shit...

Do you have any thoughts on the subject?  Or any questions on my contribution?  Post a comment, let's discuss.


~Geek

Monday, August 16, 2010

Why won't my mobile device work right? Why can't I get a damn signal? WTF!!!

This came up recently in one of my Facebook banters so I thought I'd offer my 2 cents on the subject.


At one time or another, we have all sung this song.  Lately, especially with the release of the iPhone 4, this song is all too common.  Now, to be clear, I am not blaming the carriers for OEMs making bad devices; that would be ignorant.  It seems some people like combining issues into one picture, which is not my intent here.  A bad design is just that, and its manufacturer should correct or replace it at no charge (much like Apple did after the iPhone 4 fiasco).  What I am referring to specifically is signal loss.  Most of the problems people are having lately are related to reception issues, regardless of phone or carrier - no bars, dropped calls, dropped packets, etc. That is a carrier issue, not an OEM issue.  If the supporting infrastructure does not work properly, then the device will not function as expected (think of your office network when a server is down; similar concept).  My point is that we as users need to give the carriers time to implement their upgrades so that the networks function correctly with the newer and more demanding devices...then our devices will function as expected.  Not our problem, right?  Wrong; we users set the demand, and in this case, supply is struggling to keep up.  Therefore we need to be patient and wait for the "market" to adjust.  I hope that will happen sooner rather than later, because we users are not slowing down.  In fact, with companies migrating (quickly) to cloud-based services, mobile devices will become even more popular as they replace desktops and laptops in the business world, further adding to the already congested cellular data infrastructure.  The carriers know the complicated road before them and are rushing to make it right...it just takes time.


Remember when 3G came out and how many complaints users had about dropped calls and not being able to access the Internet?  That was before everyone had a smartphone, too...in that case, as is the case now, the network could not keep up with the users.  It took years for the network to catch up, and it is still not perfect today, though it is now a functional and healthy technology.


Since it will probably be asked: "what about the stupid antenna location problem with the iPhone 4?"  The whole issue with the iPhone 4 was not necessarily that they put the antenna in a bad place; it was a combination of AT&T not being able to handle the increased data demand on top of the existing demand on its infrastructure AND the new software (iOS 4.0) having a bug in the way it interpreted the incoming cellular signal.  That was quickly fixed with a software update (remember iOS 4.0.1), and Apple had started taking back iPhone 4's and swapping them out for 3GS's at no charge, even refunding the difference to make good with the customer.  AT&T still has work to do with regards to its signal strength to the masses, but so do Sprint, T-Mobile & Verizon.


Be patient my friends!