
Sunday, August 20, 2023

"Hackers are good. Infosec is evil."

I saw this comment while scrolling the interwebs and it struck a chord within me, being both a hacker and a professional in the infosec community. I believe this comment is misleading and far too absolute. 

Hackers are on both sides...good (white hat) and evil (black hat). Yes, there are gray hats too; we'll get to those in a minute. 

Infosec is a discipline of hacking, relating specifically to the security of data and systems. I cannot accept that it is inherently evil. What I know is that it's a commercialized discipline that legitimizes hackers in society. They even offer college courses on it now, something I didn't have as an option! Infosec wouldn't exist if not for hackers. We wouldn't have firewalls, anti-virus software, encryption, or VPNs (among many, many other things), which are all designed to protect users and data from the bad guys AND from users themselves. Yes, we users are our own worst enemy, but that's a story for another time. So tell me again that infosec is evil, when its sole purpose is, generally, to do good by all netizens.

People today are flocking to infosec jobs by the tens of thousands, which is great, because we need them. Infosec brought hackers out of the shadows and into the light as white knights "saving the day", as it were. At the end of the day, which color hat you choose to wear comes down to a very personal choice about morality and civility, IMHO. Do you want to protect? Or attack? Do you want to help? Or cause chaos without remorse? It's a fine line, that's for sure, yet still a choice. 

Fundamentally, hacking is a positive thing! We look to advance technology and create digital systems in creative and imaginative ways. A core motivating value of our craft is: all information/data should be freely available to anyone who wants it, anywhere, at any time. Hard stop. Another core motivation is protecting the integrity of our digital history and not allowing any person or entity to censor information dissemination. Hard stop. Most importantly, protect humanz and human rights above all else. Hard stop. 

Yes, some individuals trend toward criminal thoughts and actions when processing these ideals, but they were already criminals with malicious intent who happen to use a computer, rather than a pistol. 

Most of us aren't criminals. 

Most of us are just kids who love electronics and technology, so we learn everything we can about them. We physically take things apart, study every facet, and put them back together - sometimes even better than they were. We learn how to manipulate systems to our will. How to protect them. How to help with and foster innovation that advances and protects society. What breaks them and causes them to fail. How to "rejigger" them so, maybe, they don't fail. How to make a better version of what was there, or take the parts and pieces of the old to make something completely new. Perhaps our biggest responsibility is to mentor the next generation to not only appreciate where we've come from (our history), but especially what our fears are for the future. This isn't to scare them (though fear is a great motivator); it's to prepare them so they can become the hackers of the next generation - whatever that may look like. 

Society made some of the things we do illegal, IMHO out of fear. It doesn't stop us from fulfilling our core ideals. It's the interpretation of these ideals that makes us inherently good or evil, at least in the eyes of society and to ourselves. 

Personally, I didn't realize I was a hacker, until I did lol. I started this game in the early 1980s as a literal child just trying to practice math and vocabulary words in a more fun way. My dad showed me how to find and edit the source code of programs on our Tandy 1000. I added my school vocabulary words to a hangman game. I added my math homework to some math program. I learned through computer programs I manipulated on a plastic box by pressing those small plastic squares. I was fascinated and excited. I learned better this way. The world seemed different, but I didn't yet understand why. That came in time. 

I didn't know that was hacking. I don't even know if "hacking" had a real meaning back then (I was 5 lol). But here I am. 

I am confident that every digital advance we've seen in my lifetime can be credited to hackers, which includes the totality of the Internet and space exploration (both inner and outer). The world would not be where it is today without hackers, good and bad. Infosec stemmed from a societal need to protect data and digital systems for humanz. Not only because of what the bad guys were actually doing, but also because of what the good guys theorized could happen. We hackers and crackers have, generally, the same level of expertise, just different motivations. 

Hacking shouldn't be a dirty word, but for a long time it was, and in some ways it still is. People and mass media commonly confuse a hacker with a cracker, which are not the same thing. I believe this is mostly mass media's fault, because they just don't understand. What's the difference? One is a criminal (cracker - short for "criminal hacker"), one is not. What makes the actions of a hacker criminal? Simply, when a law is broken. Hence the designations of white, gray, and black hats. A nod to the cowboy days of white and black hats: white for the good guys, black for the bad guys - that made it easier for everyone to understand who was on which side in a firefight. 

Gray is where most hackers, and thereby infosec peeps, live - we only have good intentions, though sometimes we need to, technically, bend a law, or even break it, to accomplish our goal for the greater good. Again, our intentions are pure, but laws exist that make certain specific actions technically illegal. Hence why it's a "gray" area. Black hats are hardcore criminals whose only mission is to disrupt and/or steal, for financial gain, with complete disregard for any fallout - even if that results in the loss of life. 

White hats have a moral compass and good ethical beliefs, as do most gray hats. 

Black hats do not. 

The original definition of "hacker" I learned as a child, and still hold close to my heart today, went something like this: "an individual with advanced knowledge of computers and/or digital systems, who is capable of taking that system beyond its pre-defined programmatic limits." So, basically, if someone makes any change on a system that goes beyond the original programmed intent, that makes them a hacker too! For example, did you change the color theme and desktop background on your computer to a custom concept (not one of the canned choices)? You technically hacked the system. See, it's not all about writing malware, or attacking companies, or breaking into the government, or bringing down someone's website. It's about system manipulation in its purest, simplest form. 

So the next time someone summarily says that hackers or infosec are inherently good or evil, discuss their context. Approach it as a way to mentor or guide someone to a better appreciation of the craft, which is clearly not as black and white as anyone would have you believe. Help them understand that we just see the world differently than most. The euphoric streams of 1's and 0's, speeding alongside electrons, as they bounce everywhere and nowhere simultaneously, connecting humanz like nothing before, to everything. I think it's beautiful, in all of its glory - the good and the bad. It's more vast than our physical universe, yet the size of a speck of space dust. 

I think one of the coolest things I realized in all my years is that at their true core digital systems and the internet are just electrons moving around and settling in different states in different physical locations. It's real, but not tangible. It's we hackers that have figured out how to manipulate those electrons into the world we live in today. The world most depend on to survive. Infosec is focused specifically on making the manipulations as safe as possible, for everyone. 

It is simultaneously good and evil. Both the greatest genius and greatest disappointment humanz have to offer at this moment in time. Respect it, don't fear it. Appreciate it, don't take it for granted. Be aware. Stay safe.

That's my perspective. This is my genius. 

I, am a hacker.

I know enough to make me dangerous. I know better than to be dangerous. I chose to protect, rather than to attack. 

How do you see things? What is your choice? 

Sunday, July 2, 2017

Time to Air Gap

In a world of 24-hour activity literally being streamed in real time across the globe and beyond, at a rate of trillions of bytes per second at the speed of light, the biggest question in privacy is how to achieve anonymity when almost nothing is secret. There is still a way to go 99% of the way there. It's simpler than you think, but an effort nonetheless.

Some background. Everything we do in digital form is cataloged and stored in a vast array of databases and servers across an amazing number of touch points, which is then synchronized across a thousand other servers for redundancy and caching, which is then backed up to dozens of other servers, with their own redundant backups. Anything you put online...any application you use...any "terms of service" you agree to...any text or media you post...remains online forever. With the right tools and search terms, anything can be searched for, spied on, or downloaded in an instant. It's been this way for decades, and it will continue to be that way for centuries to come, especially given how connected the planet is and as long as there is electricity.

Some discussion. Cyber attacks are a constant thing. We should take it as a starting point that cybersecurity compromises are the third certainty in life. The cyber world is constantly at war with itself. Governments hacking governments. Corporations hacking corporations. Governments hacking corporations. Hackers hacking governments and corporations. Hackers hacking hackers. Governments and corporations hacking hackers. And then there's everyone else. Generally oblivious. Privacy is a luxury, which we give up willingly every single second of every day. The emergence of intelligent systems, artificial neural networks, and deep learning algorithms only pushes this further. They take, store, and learn from every bit of data we leave as breadcrumbs. Artificial intelligence is here, and it is learning. From us. And we're letting it. Give it enough processing power, and it becomes self-aware. Quantum computers will make that very real, very soon.

Some perspective. Having lived through the evolution of modern computing, including the Internet, all of this is absolutely fucking amazing, and a geek's ultimate wet dream. A demonstration of true humanz genius, ingenuity, and progress (not as far as we should be, but progress nonetheless). Highly impressive in the vastness of its brilliance and simple complexity. I love using it, and learning about it, and protecting it. All of it, if I am completely honest, scares the living shit out of me. There is too much. It has become frightening. AI is now making decisions and inferences faster than humans, and has even been seen generating its own programming code. So, the concept of air gapping entered my mind as a way to keep safer than I already am. Most cannot see the signs, or do not want to admit they exist; however, I am of the firm belief that World War III is already well underway, and we need to protect ourselves, especially our digital lives, as I feel they are the most vulnerable to compromise. Stay with me, it's all relevant.

It has been discussed for decades that the next major global war would be fought half online, and half in the real world. The evidence is all there, and I do not believe it to be simple coincidence. Global newz outlets, small town newz papers, radio ztations, and zocial media have been propagating images of this war. Pick a topic...WMD's, genocide, terrorism, ransomware, deep web market hackz and seizures, arrests of crackers and phreakz, data breaches, RFID implants, cyber surveillance initiatives, counter cyber terrorism, weapons trafficking, the unavailability of bullets to the public, gun control politics, powerful botnetz, election hacks, political hackz, hardened/weaponized computer systems...I hope you get the point.

Back full circle. Traditionally, air gapping a system means it has no network interface cards or external drives with which to access or extract the data contained within said system. Nor can anyone get close enough to implant a listening device that reads the vibrations or thermal changes given off by the system's internal hardware and converts them into bits representing the data being actively accessed (such as login credentials, encryption/decryption key exchanges, data manipulations, etc.). The only way to extract the data contained within is by sitting at the console and physically removing the locked and encrypted drives, if there is no SD port. Then, if you can pull the impossible off (which includes getting the data off campus), you would need supercomputer power to decrypt the contents of the drive, which would still take 1,000 years to break (unless you are lucky). There is still the idea that once you decrypt the data, it could transmit its location to its owner, meaning you too would need an air gapped system to exfiltrate the data. Then comes what you do with said data. Yet another catch-22. The NSA, CIA, FBI, DEA, DHS, militaries, every government, and super corporations maintain their most secret data on air gapped systems. Physical access to these systems is extremely limited and highly controlled. It's considered the safest digital platform because the system isn't connected to anything but a power cable, and thus, in theory, cannot be hacked. A true digital safe, as it were. We know anything can be hacked; it just takes time. As hackers, we count on human error and complacency, making even air gapping a 99% solution, and the best we've got. Now, take this concept and apply it to a human life. It's far simpler, and also 99%. The anomaly is human nature.
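
For the technically curious, here is a minimal sketch of what a crude air-gap self-check might look like on a Linux box, assuming Python 3 and the usual /sys layout; it only looks for network interfaces and removable drives, so treat it as a sanity check, not proof of an air gap.

```python
#!/usr/bin/env python3
"""Crude air-gap self-check for a Linux box.

A sketch only: it lists network interfaces and removable block devices. A real
audit would also look at USB controllers, Bluetooth, microphones/speakers
(acoustic side channels), and firmware-level radios."""
import os

def network_interfaces():
    # Every interface the kernel knows about shows up under /sys/class/net.
    # On a truly air-gapped box you expect to see only the loopback device.
    return [i for i in os.listdir("/sys/class/net") if i != "lo"]

def removable_block_devices():
    # Block devices flagged "removable" (USB sticks, SD cards) are another
    # path on and off the machine that air gapping tries to eliminate.
    found = []
    for dev in os.listdir("/sys/block"):
        flag = os.path.join("/sys/block", dev, "removable")
        if os.path.isfile(flag) and open(flag).read().strip() == "1":
            found.append(dev)
    return found

if __name__ == "__main__":
    nics = network_interfaces()
    removables = removable_block_devices()
    print("non-loopback interfaces:", nics or "none")
    print("removable block devices:", removables or "none")
    if not nics and not removables:
        print("No obvious network or removable-media paths found (still not proof of an air gap).")
```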

Based on my research, here is what I have learned about how to go 99% off grid digitally. While I do not yet practice everything I note here, I am closer than even those who know me best are aware.

1) Get off the internet, period. No social media, no surfing the clearnet, no online purchases, no clearnet email accounts. If it becomes absolutely necessary to access the Internet for a specific purpose, there are completely anonymous ways to do these tasks, on secure systems like Tails over TOR, for example, using cryptocurrency, and ghost mailboxes. Avoid Google at all costs. Use TOR browser, responsibly (www.torproject.org). But generally, just leave it all. Stop posting immediately, delete your accounts, and never go back.
2) Get rid of your smartphones, tablets, Windows and Apple computers, smart devices (TV's and refrigerators included), iRobots, etc. Need a cell phone? Buy a prepaid flip phone, and change it (and the number) every month (aka burners). Every phone can eventually be traced and tracked. Still need a computer? Learn Linux, how to secure it, and practice way smarter browsing habits (use TOR browser), if you browse at all. Keep in touch with world events, anonymously, and continuously hone your skills.
3) Always use cash or cryptocurrency, for everything. If you have to make an online purchase, use cryptocurrency, the deep web (local markets only, don't buy overseas, and be very careful), and have it shipped somewhere that is not your home, like a post office box, a business, or an associate's location, under a false name. By the way, there are ATMs now that let you convert cash to BTC, and vice versa. Look it up using DuckDuckGo.com (a safe search engine).
4) Drive an older car that does not have a computer in it, or at least has all-analog systems. Yes, cars are also being hacked, remotely. Keep it clean and running well though; you don't want to draw undue attention. Walk or take public transportation when you can, avoiding direct face contact with cameras. When you do go places, change your entry/exit routes regularly...avoid habitual patterns, unless necessary to remain hidden in plain sight (like going to work, or getting groceries).
5) If you must, have an immaculate and purposeful digital/public footprint. That means a clean record and a "normal"-looking life, so as to not draw undue attention. Keep it super minimal and protected, even fake some details if you wish, but it has to be believable. Your outward personality must seem conforming, friendly, and genuine. When people search for you online, they need to find only what you want them to find. Purposeful is the key word here. To keep your accounts secure, use a dice word list to generate passphrases of 7 to 10 or more words (as the host allows) for plenty of entropy, and rotate passwords on a schedule (see the sketch after this list).
6) Second most important after getting offline, and the best to mention as the final piece of advice: live simply and minimally. Only get what you literally need to live comfortably and look "normal". The trick to hiding in plain sight is being distant enough that people respect your privacy, but involved enough that they believe you to be a "normal, nice guy/gal". Avoid run-ins with the law and reporters. Do not have public arguments. Remain intelligent, articulate, empathetic, determined, and most of all inquisitive. Question everything, be aware of everything.
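
As promised in item 5, here is a minimal sketch of a diceware-style passphrase generator. It assumes you have a local copy of a 7,776-entry dice word list (the EFF large wordlist, for example) saved as eff_large_wordlist.txt; the filename and format are my assumptions, and real dice kept entirely offline are even better.

```python
#!/usr/bin/env python3
"""Diceware-style passphrase sketch (see item 5 above).

Assumes a local 7,776-entry dice word list, one "11111<TAB>word" entry per
line; the filename below is my assumption."""
import math
import secrets

WORDLIST = "eff_large_wordlist.txt"   # assumed local file

def load_words(path=WORDLIST):
    with open(path) as fh:
        # Each line looks like "11111\tabacus"; keep only the word.
        return [line.split()[-1] for line in fh if line.strip()]

def passphrase(words, count=7):
    # secrets.choice uses the OS CSPRNG; real dice keep the choice fully offline.
    return " ".join(secrets.choice(words) for _ in range(count))

if __name__ == "__main__":
    words = load_words()
    count = 7
    bits = count * math.log2(len(words))   # ~12.9 bits per word for a 7,776-word list
    print(passphrase(words, count))
    print(f"~{bits:.0f} bits of entropy from {count} words")
```

Seven words from a 7,776-word list gives roughly 90 bits of entropy, which is why the 7-to-10-word range in item 5 is worth the extra typing.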

The more literally you can get out of Dodge - move to the mountains in the middle of nowhere, or something like that - the closer to 99% you get. If you are not online, there is nothing to take or attack. Here again, human nature is the anomaly.

You can be connected, yet a ghost. You can see the world, without a face. You can reach out, without being reachable. The fewer connections you maintain, the better. I am committed. How far are you willing to go?

~Geek

This blog is only to express the opinions of the creator.  Inline tags above link to external sites to further your understanding of current methods and/or technologies in use, or to clarify meaning of certain technical terms.  Any copyrighted or trademarked terms or abbreviations are used for educational purposes and remain the sole property of their respective owners.

brought to you by http://geekofthehouse.blogspot.com

Thursday, November 13, 2014

Digital Security Discussion

Topic of discussion in my Enterprise Models class tonight: Digital Security.  Something I touched on earlier this year.

Our text postulated: "Increasingly opening up their networks and applications to customers, partners, and suppliers using an ever more diverse set of computing devices and networks, businesses can benefit from deploying the latest advances in security technologies."

My Professor said: "My thoughts on this are opposite: by opening up your network, you are inviting trouble and the more trouble you invite in, the more your data will be at risk. I understand what they are hinting at, with a cloud based network, the latest security technologies are always available, therefore, in theory, your data is more secure. Everyone needs to keep in mind though, that for every security patch developed, there are ways around them."

He went on to mention how viruses could affect the cloud as a whole and that companies and individuals moving to cloud-based platforms will become the next target for cyber attacks as the model continues to thrive.

Which is all relevant; however, I have a different perspective on digital security. My counter argument is that user education is the key. I have debated this topic, security and system users, many times over the years. Like most of us in the industry, I consider information security paramount. With the multiple terabytes of data we collect in our home systems, and even more in online interactions, keeping our data safe is really our last defense in privacy and security.

As more companies and individuals place their corporate and personal data on cloud platforms, there is an uneasy sense of comfort for many people, including some seasoned pros. Companies like Google and Microsoft, which both have highly successful cloud models across the board, have taken responsibility for ensuring they have more than adequate digital and physical security in their data centers, which to an extent leaves it to assumption that the data and applications they warehouse and host are generally safe from intrusion.

Users are the key to this whole ecosystem we have created. This is where user education becomes critical. As most seasoned techies know, in the beginning systems and system operations were highly technical in nature, and only the most highly trained or technically creative individuals could initiate and manipulate computer systems. Viruses were something you caught from kids at school or coworkers, not a daily blitz of digital infections numbering in the hundreds of millions, perpetually attacking in various forms. As systems got more complex in design but simpler in use, the user's technical ability level eventually became irrelevant. People ages 1 to 100, and even some very well trained animals, can all navigate systems and digital networks with very little effort. Our systems now do all the work for us; users simply need to provide basic instructions and gentle manipulations, instead of hard-coding instruction sets and inventive on-the-fly program generation as was the status quo in the 70's, 80's, and 90's. This idle-user mindset is the reason why criminal hackers are still traversing firewalls and breaking encryption algorithms, and they are growing in numbers, as is evident from the number of new malware detections and infections quantified annually across all digital platforms and all continents.

Educating users on general best practices for system use and maintenance, how to identify potential scams, how to detect spoofing and malformed websites, what to avoid when reading emails or reviewing search results, and which security software is functionally the best (whether free or paid) is more critically important today than it has ever been. The problem is that the industry has created the lazy user by essentially conveying that security is a given. Microsoft even made a concerted effort by including the Windows Firewall and Windows Defender as part of its operating system by default, so that there was some protection for their users out of the box. This was in response to a large number of users, who had been infected by one or more viruses, assuming they were protected because "it's from Microsoft, it has to be safe" - which was further from the truth than they could understand.

As an educated user who knows how to secure systems and networks, I take it upon myself to ensure that users appreciate that they have to set passwords when logging into various systems and services. I teach them the importance of digital security and how to be more security-conscious in their everyday interactions. 
I teach them how to correctly navigate Internet search results (avoiding "ads"), how to understand various security prompts and what they look like so they don't ignore them, what security solutions should be installed and how to identify them, etc. This improved knowledge has created a culture of awareness among my users both at work and at home. I am regularly consulted by my peers on how to secure their own families' systems and how to explain it to their children. This creates a more intelligent user, and thereby a more intelligent user community at large, making the Internet a bit more secure. All of that said, it only takes a single character missing from source code to break a program and cause havoc, or a single user inadvertently installing malware. Even the most seasoned users make these mistakes from time to time, because we are all human, and as such we are fundamentally flawed - making no security solution 100% secure, because they are developed and used by humans. The best you can do is make every effort to educate and secure, and hope no one targets you, because if someone wants in badly enough, they will get in and you won't be able to stop them.
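
To make the "detect spoofing" lesson concrete, here is a toy sketch of the kind of scrutiny I walk users through before they click a link. The trusted-domain list and the character-substitution table are illustrative assumptions on my part, not a real anti-phishing product.

```python
#!/usr/bin/env python3
"""Toy lookalike-URL check, the kind of scrutiny I teach users to apply by eye.

The trusted list and substitution table are illustrative assumptions, not a
complete anti-phishing solution."""
from urllib.parse import urlparse

TRUSTED = {"microsoft.com", "google.com", "paypal.com"}               # assumed examples
LOOKALIKES = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s"})  # common character swaps

def suspicious(url):
    host = urlparse(url).hostname or ""
    domain = ".".join(host.split(".")[-2:])   # crude guess at the registrable domain
    if domain in TRUSTED:
        return False, "domain is on the trusted list"
    if domain.translate(LOOKALIKES) in TRUSTED:
        return True, f"'{domain}' imitates a trusted domain with swapped characters"
    if host.startswith("xn--"):
        return True, "punycode hostname - possible homograph attack"
    return True, "unrecognized domain - verify before entering credentials"

if __name__ == "__main__":
    for link in ("https://www.paypal.com/signin",
                 "http://paypa1.com/verify-account",
                 "http://xn--paypl-7qa.com/login"):
        flag, reason = suspicious(link)
        print(("SUSPICIOUS" if flag else "ok        "), link, "->", reason)
```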

~Geek

Tuesday, October 14, 2014

Artificial Intelligence and Decision Making


In a recent discussion in my Enterprise Models class, a classmate and I debated the limitations of Artificial Intelligence theories and human emotions. Here is my response:

From the research I have been doing over the years on AI specifically, one of the biggest challenges is how to program emotions into a computer system. I think there are two primary problems currently. One, and the main problem, is that modern computing technology processes things in a linear fashion; every time slice of a CPU cycle is occupied by either a 1 or a 0. There is no middle ground, there is no gray area. Everything is black or white, and follows a strict logic rule set. What is currently being done with systems like Watson and Google's web crawler software is using software to simulate scenarios and have the hardware crunch the data, while another part of the software provides the processing logic through algorithmic manipulation, thereby creating an intelligent system. Current intelligent systems are limited by the scope of their programming environment. Two, there isn't yet a programming language that can accurately tell a computer how to do what it needs to do in order to understand the logic behind a feeling. Most of the researchers I have found over the years say the technology isn't there yet, and I happen to agree. The possible solution to this quandary could be quantum computing.

With quantum computing, a qubit offers a system the ability to see a data stream in two states simultaneously. Each qubit is BOTH on and off (1 and 0) in the same "time slice" of a processing cycle, leveraging the power of superposition and entanglement. This allows the system to perform many operations on the same data stream. Neural networks simulate this through software, but over hardware that still processes data in a linear fashion. What we need is hardware that performs this natively, because it can do so much faster than software could ever process the same data stream. Enter quantum computing. D-Wave Systems is the current leader in true quantum computing with their D-Wave quantum computer, but their system is highly specialized at the moment due to a lack of programming knowledge...while the system has amazing potential, as you will see from a couple of the links below, no one really, truly understands how to use it yet. There are other links below with details on their system and methodology.
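
For anyone who wants to see the superposition idea in action, here is a minimal sketch that simulates a single qubit classically with NumPy. To be clear, this is my own illustration of the math, running on ordinary linear hardware; it shows the measurement statistics, not the speedup.

```python
#!/usr/bin/env python3
"""Classical simulation of one qubit, just to illustrate superposition.

This runs on ordinary linear hardware, so it only models the math; it does not
deliver the parallelism real quantum hardware promises."""
import numpy as np

ket0 = np.array([1.0, 0.0])                         # the |0> state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)        # Hadamard gate

state = H @ ket0                                    # equal superposition of |0> and |1>
probs = np.abs(state) ** 2                          # Born rule: probability = |amplitude|^2

rng = np.random.default_rng()
samples = rng.choice([0, 1], size=10_000, p=probs)  # measurement collapses to 0 or 1

print("amplitudes:", state)                         # [0.7071..., 0.7071...]
print("P(0), P(1):", probs)                         # roughly 0.5 each
print("counts of 0s and 1s:", np.bincount(samples)) # close to 5000/5000
```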

The problem with quantum computing is that it requires a completely new way of perceiving computers, and also a completely new way for users to interface with computers, not to mention new hardware that performs in ways modern hardware cannot. That is what I see as the next wave of technological evolution. As transistors become subatomic with the help of graphene and carbon nanotubes, and technologies like memristors look to shatter our perceptions of information storage capacities and data throughput, quantum computers will become more commonplace across the landscape. The ability to create a true quantum system capable of processing complex emotional patterns is very real. Once we have a true quantum processor, and a true quantum operating system, we will have not only the power to process it in fractions of a nanosecond but also the programming logic and syntax to leverage an intelligent system, and possibly create a sentient computer system, otherwise known as AI.

AI is a fascinating concept, and exactly why it will be the focus of my post-grad work. Quantum computing has been a subject I have dreamed about and followed since I was a young boy, before computers were commonplace and technology was still considered a super luxury. Today technology is seen as a necessary commodity, but there are still concepts that have yet to be discovered or invented, and quantum computing is currently the field of interest. Once we researchers and scientists figure it out, it will change the world.


D-Wave System References:
http://recode.net/2014/09/25/d-wave-ceo-our-next-quantum-processor-will-make-computer-science-history-video/
http://www.dwavesys.com/quantum-computing
http://www.dwavesys.com/d-wave-two-system
http://time.com/4802/quantum-leap/


Quantum Computing References:
http://techland.time.com/2013/09/25/the-carbon-nanotube-breakthrough-moores-law-needs-to-survive-well-see/
http://phys.org/news387.html
http://www.physics.ox.ac.uk/nanotech/research/nanotubes/index.html
http://www.tum.de/en/about-tum/news/press-releases/short/article/30589/

Sunday, October 5, 2014

Technological Evolution - Quantum Computing, Memristors, and Nanotechnology

It is amazing how the evolution of technology changes perspectives on the future so quickly. With holographic interactive screens currently in use, memristors and atomic-level transistor technologies at our fingertips, and new developments in using light as a means to interact with systems or store system data, the reality of AI and systems like Jarvis is finally able to go from drawing-board concept to real-life prototype. For as long as I can remember, I have been talking about quantum computing and nanotechnology and how they are the future of systems and human interactions. As a younger teen, when I first started learning about quantum mechanics and ultra-microscopic hardware theories, I saw then that the future of computer systems and computer-human interactions was going to be largely logic based and function faster than the speed of human thought. By marrying the concepts of quantum mechanics and advanced computer system theory, intelligent systems and true AI are highly viable and will be here within the current generation.

As advances in nanotechnology take transistors to the subatomic level, and theories in quantum computing become a reality, we are quickly going to see the industry change as the traditional system paradigm is shattered and a new evolution in technology is ushered in - I would call it the quantum age - where Schrödinger's cat can exist in both physical states without the concern of observation tainting the true reality of the object's existence. The potential gains from the quantum processors and quantum computing methods that scientists around the world are currently developing into physical models are, at the moment, limited only by manufactured hardware capacities. As physical hardware capacities become perceived as unimportant to system planning schemes - due to advances like the memristor and photonics, including the newest nano-laser (see reference) - the focus can be given to writing programs that can take advantage of this new systems paradigm.

What is going to take time is the change in mindset required to understand how to use a quantum system, because it requires a completely new approach to hardware and software design. Modern systems process data in a linear manner, processing one bit after another based on the time-slice protocol programmed into the operating system and the CPU itself. Regardless of how many processors you throw at a system, each still only processes one bit of data at any given time slice. The fastest supercomputer, China's Tianhe-2, can process more than 6.8 quadrillion bits per second (3.12 million processor cores x 2.2 GHz each = 6,864x10^12 operations per second), but each core still only processes one bit at a time. Quantum systems do not function in this manner; they function in a far different reality where a bit can be both a 1 and a 0 simultaneously within a single time slice, though quantum processors would not use a time-slice function - they would require something else yet to be defined. As scientists gain a better understanding of how to create truly quantum computer systems, and a quantum-capable operating system, we will see technology advance into arenas yet to be discovered. What we once called science fiction is quickly becoming scientific fact.
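
For anyone who wants to check that back-of-the-envelope Tianhe-2 figure, here is a quick sketch using the same simplification I made above (one operation per core per clock tick):

```python
#!/usr/bin/env python3
"""Back-of-the-envelope check of the Tianhe-2 figure quoted above.

Treats one clock tick per core as one operation, the same simplification the
paragraph makes; real throughput depends on instructions per cycle, vector
width, and so on."""
cores = 3_120_000        # Tianhe-2 core count
clock_hz = 2.2e9         # 2.2 GHz per core

ops_per_second = cores * clock_hz
print(f"{ops_per_second:.3e} operations per second")  # ~6.864e+15, i.e. ~6.9 quadrillion
```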

~Geek


References: http://www.sciencealert.com.au/news/20143009-26256.html (nano-laser)

http://www.top500.org/system/177999 (Tianhe-2 details)

Friday, August 22, 2014

Technology Roadmap - Wearables

Since I started working on my Master's in Information Systems, I have been learning a lot about many different aspects of IS.  Aside from really helping me focus my perspective on what I want to research for my post-grad work, I have been enjoying all that I am learning, and this recent class (as of this post) is no exception.

The last paper I did in this class, CMGT557 Emerging Technologies & Issues, was to create a technology roadmap for an emerging technology.  Since wearables are something I blogged about a month ago, I chose them to extend the concept into a full plan.  Here's my 2 cents...~Geek


Technology Road Map: Wearables
Current State of Technology
            Wearables are extensions of our smartphones, tablets, and phablets, offering a limited set of capabilities that is inferior to our smart devices but highly functional as currently designed.  Driven by innovations in miniaturization, improved power efficiency, and curved high-resolution glass screens, products like the various takes on the computerized watch, the Samsung Gear Fit bracelet and other exercise monitors, Google Glass wearable computers, systems embedded into clothing for various purposes (muscular development, health monitoring, etc.), and biological chips embedded under the skin that hold medical conditions and history details are all wearable technologies already changing how a lot of services are delivered.  Through improving miniaturization processes and manufacturing capabilities enabled by more precise automation systems, these wearable technologies will cause market disruption for products that currently dominate the technology market, such as laptop computers and other larger portable computing devices.
Business Initiatives and Drivers & Technology Landscape
            As the mobile workforce continues to expand through thinner and lighter computing devices with more available connections to high-speed access points, many businesses are able to follow their normal workflows without being physically tethered to their offices.  Currently there is a suite of devices that enables the mobile workforce, including smartphones, tablets, and laptop computers.  Their integrated devices, security feature sets, and in some instances rugged designs lend themselves to providing a highly portable and productive work platform available from any location with a data connection.  For a salesperson in a world of light-speed communications and instant gratification, being able to access critical customer and product metrics with a couple of taps of a fingertip is the difference between generating and landing opportunities versus potentially losing them completely.  Combined with back-office line-of-business applications linked through the Internet, the mobile workforce is able to efficiently and effectively conduct business without geographic limitation.  Wearable technologies aim to revolutionize how business is conducted, allowing for more efficient multitasking through wearable communication devices, powerful wearable computers, and biological microprocessors that can use near-field communications to interact with the environment and connect to wireless Internet devices, retrieving data from corporate data warehouses that the wearable computers then process and display for use and/or sharing.  Nanotech devices could enable video and audio communications through cybernetic-like implants, beaming high-quality, high-definition signals directly into the user's sensory receptors and providing an immersive experience that functions at the speed of thought.  These same nanotech devices, once outfitted with artificial intelligence logic and processing, would become the next generation of executive or administrative assistant, able to recognize trends in a user's usage patterns to help anticipate potential reactions to situations and provide guidance on how to successfully navigate the landscape, while providing useful data streams of relevant information that enable an intelligent and informed decision process.  When a worker is presented with all the relevant data pertaining to a situation and is able to perceive all the potential outcomes of reactions to interactions, with the assistance of intelligent nanotechnologies, they are able to make the best choice for a given situation, resulting in improved satisfaction, a higher probability of positive outcomes, and in turn increased revenues. 

Gap Analysis & Migration Strategy
            In order for wearable technologies to successfully transition into the enterprise on a widespread basis, there are a few key gaps that need to be addressed as this emerging technology evolves.  The first gap to address is the technologies themselves, as a majority of these capabilities are either in their early stages of development or only partially implemented.  As mentioned, wearable technologies are currently used as accessories for their larger host devices, integrating key functionality into said accessory, such as voice-to-text/text-to-voice capabilities and the capture of health data for monitoring purposes, both possible without the use of large or complex devices that may or may not be portable themselves.  In order for wearable devices to successfully evolve into independent computing systems, circuit, transistor, and storage technologies must continue to miniaturize to nano-scale form factors.  With the recent developments of carbon nanotubes and memristors, these microscopic form factors are becoming reality.  There is a group out of Australia that has successfully created a nano-transistor that is a single phosphorus atom, whose atomic radius is 0.098 nanometers.  This is a direct step toward nano-transistors that, once the research is complete, will result in sub-nano-scale computing methods and should lead to quantum computing.  This would establish the foundation for very powerful systems that could be easily embedded into biological hosts to enable the advanced collaboration and communication methods necessary to conduct business in the next generation.  The next gap to analyze would be embedding these systems into biological hosts, taking advantage of the bioelectricity generated to maintain continuous power states, as well as neurologically connecting said bio-hosts to these nano systems to provide cohesive functionality that does not impede either entity.  Currently no solutions exist; however, neural and material sciences have made advances, creating technologies that can mimic such environments and thus lead to an understanding of how to interface with them directly through biological and chemical processes.
Governance
            The Federal Communications Commission (n.d.) website states that the FCC regulates interstate and international communications by radio, television, wire, satellite, and cable in all 50 states, the District of Columbia, and U.S. territories, and that it is the primary authority for communications law, regulation, and technological innovation.  As such, it would be responsible for mandating policy on how to manage the integration of nano devices into mainstream use and where their use is inappropriate.  As the industry evolves and technologies continue to shrink, the FCC will be at the forefront of determining how and when the use of these technologies is ultimately appropriate for public integration once the core infrastructure is in place.  Currently, there are no specific laws dictating how or when these devices can be used, only that they cannot actively interfere with other electronics and must accept any interference received from other electronics, per the standard mandate printed on every consumer electronic device approved by the FCC for use.
Conclusion
            There has been a shift happening over the past couple of decades that the author has been tracking along with some peers.  As technology advances and devices continue to shrink in size while increasing in power, users are following suit by moving from clunky desktop systems to laptops, to ultrabooks, to tablets, smartphones, and now wearables.  Given how capable commercially available wearable computing already is today, combined with the research being done in nanotechnology and artificial intelligence, cloud-based service offerings, and vast storage facilities, the future of wearable computers is already well in hand, with more innovations coming as we begin to fully understand how to manipulate and integrate technologies such as nanotubes and nanowires that allow us to take computing capabilities down to microscopic levels.  The potential is nearly limitless, with the ability to theoretically build nanomachines that are self-sufficient, self-reliant, and highly aware.  Wearable microprocessors embedded in a person's skin could be the hub that enables personal interactions with our various devices and daily system interactions, as well as with medical facilities, civil and government facilities, and large-scale advertisements, to provide a highly customized and personal experience not previously possible.  There are privacy and security considerations to be understood, which will require that regulations be put into place to protect the providers of these devices as much as they protect the users of wearable devices.  Those can only be realized as these technologies continue to be developed and infiltrate the professional realm.


References
98 Pm in nm. (2014). Retrieved from http://tejji.com/convert/length-metric.aspx?q=98-Pm-in-nm
Anthony, S. (2013). Killing silicon: Inside IBM’s carbon nanotube computer chip lab. Retrieved from http://www.extremetech.com/extreme/147596-killing-silicon-inside-ibms-carbon-nanotube-computer-chip-lab
Federal Communication Commission. (n.d.). What We Do. Retrieved from http://www.fcc.gov/what-we-do
University of Phoenix. (2014). Week three supporting activity: effect of emerging technologies on services. Retrieved from University of Phoenix, CMGT557 - Emerging Technologies and Issues website.
Size of phosphorus in several environments. (n.d.). Retrieved from http://www.webelements.com/phosphorus/atom_sizes.html
Smith, D. (2012). Nano-transistor breakthrough to offer billion times faster computer. Retrieved from http://www.smh.com.au/technology/sci-tech/nanotransistor-breakthrough-to-offer-billion-times-faster-computer-20120221-1thqk.html

Sunday, March 16, 2014

Hackers & Stuxnet: Education & Best Practices can Change Perspectives

Speaking of hackers in general, the video "Creating panic in Estonia" was well done.  It speaks to aspects of cyber security I have touched on personally with peers and users who are generally unaware of how dangerous the Internet can be, and who do not understand how they should be protecting themselves and, in turn, the rest of the user community at large.  The global dependency on the Internet as a necessary aspect of daily life can, and may, eventually lead to its demise.  It used to be seen as a tool, something that made research easier or enabled more efficient delivery of goods and services to customers.  The global Internet is far more interconnected than most people can comprehend.

We, as IT pros and field experts, find ourselves at a unique crossroads when it comes to the cyber realm.  On one hand, we are users who find entertainment and conduct business transactions through the Internet.  We keep in touch with friends, relatives, and associates.  We pay bills and send gifts to people, through the Internet.  We curiously investigate other perspectives on anything we can think of, reachable with a simple search string.  On another hand, we develop and/or service information systems and are responsible for ensuring that the users are not their own worst enemy, and that the executive stakeholders understand why expenses are necessary to ensure seamless operations while maintaining data security and integrity through digital interactions across the company or across the Internet.  Yet another proverbial fork is that of the hacker.  Not necessarily one who breaks into systems with malicious intent - primarily the hackers (criminals) most people hear about - but the white hat hacker who, like the man who works for Kaspersky in Russia, looks to improve the quality and safety of the Internet.  In order to beat a hacker, one must be able to think like a hacker, and have intimately specific knowledge of software and systems, how they interconnect, and how users interact with them.  This whole-view perspective on digital communications is necessary in order to properly safeguard oneself, and also the global user community.

Education and repetitive reinforcement have been the successful combination for me in getting users at all levels to start investing in cyber security and taking a different view of what they share on the web.  People contact me regularly to clean infections from their systems and networks.  Unfortunately, most of these users are of the mindset that "it should just work, and never give me problems, regardless of what I do," which has no substantive basis in reality.  In reality, it takes the combination of dozens, if not hundreds, of software applications to make a system function the way it does today.  Since no user situation is ever the lab-tested "ideal" situation, users must be educated not only on how to use their system, but also on a basic understanding of how their use affects everyone connected to their system, and the subsequent systems the collective interacts with.  As experiences and exposure to different interactive scenarios accumulate, continued education is the key, which not only makes a better user but a better system as a whole.

The Stuxnet event could have been avoided with the right engineer designing security protocols, establishing policies, and integrating hardware solutions designed specifically to deny access to unauthorized devices on a network.  Granted, Estonia may not have had access to the technology necessary to implement such a system, but those types of systems do exist.  It is for this reason that companies like Symantec, McAfee, and Kaspersky have integrated a feature into their anti-everything software packages to instantly scan any removable device attached to a protected system.  Granted, Stuxnet did not yet have a known signature and thus could not be specifically scanned for, but those packages also have zero-day detection capabilities, meaning they have an algorithm designed into the software to detect virus-like patterns and flag them as suspicious - which is how Stuxnet was ultimately found, through a zero-day detection algorithm.  While these capabilities are highly effective, threats can only be detected on a system with this type of software installed.  Unfortunately, there is a large number of users who do not have protective software, let alone hardware solutions, installed on their systems and/or networks, which leaves them, and anyone they connect to, highly vulnerable.  Here again, education is the key - once people can be made to understand the risks involved, they will be willing to learn how to best safeguard themselves, which protects everyone else they interact with digitally.  It is like getting an immunization for a disease - if everyone gets the shot, then no one can transfer it or get it from someone who is infected or has not had their immunization.  The shot cures any carriers of the disease, prevents spreading of the disease to others, and does not allow the inoculated to become carriers again.  That is the same philosophy behind security software, and it is important for the same reasons.
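
Purely as an illustration of the signature-plus-heuristic idea those suites implement, here is a toy sketch of a removable-media scan. The mount point and the "known bad" hash are placeholders of my own, and real products do vastly more (signed signature feeds, behavioral analysis, kernel-level hooks).

```python
#!/usr/bin/env python3
"""Toy removable-media scan: the signature-plus-heuristic idea in miniature.

The mount point and 'known bad' hash are placeholders; real AV suites use
signed signature feeds, behavioral analysis, and kernel-level hooks."""
import hashlib
import os

MOUNT_POINT = "/media/usb"              # assumed mount path for the removable drive
KNOWN_BAD_SHA256 = {"deadbeef" * 8}     # placeholder signature, not a real one
SUSPICIOUS_NAMES = {"autorun.inf"}      # crude heuristic: autorun files on removable media

def sha256(path):
    """Hash a file in chunks so large files don't blow up memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(root=MOUNT_POINT):
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            if name.lower() in SUSPICIOUS_NAMES:
                print("heuristic flag:", full)      # pattern-based, like zero-day detection
            if sha256(full) in KNOWN_BAD_SHA256:
                print("signature match:", full)     # known-malware lookup

if __name__ == "__main__":
    scan()
```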

~Geek

Reference: Video On Demand - http://digital.films.com/PortalPlaylists.aspx?aid=7967&xtid=50121&loid=182367