The Age of Information

The Information Age (also known as the Computer Age, Digital Age, or New Media Age) is a historic period in the 21st century characterized by the rapid shift from the traditional industry that the Industrial Revolution brought through industrialization to an economy based on information technology. The onset of the Information Age is associated with the Digital Revolution, just as the Industrial Revolution marked the onset of the Industrial Age.[1] The definition of what “digital” means (or what “information” means) continues to change over time as new technologies, user devices, and methods of interacting with other humans and devices enter the domains of research, development and market launch.

During the Information Age, the digital industry shapes a knowledge-based society and a high-tech global economy that influence how the manufacturing and service sectors operate efficiently and conveniently. In a commercialized society, the information industry allows individuals to explore their personalized needs, which simplifies decision-making for transactions and significantly lowers costs for both producers and buyers. These efficiencies are widely embraced across economic activity, and they in turn encourage new economic models such as the knowledge economy.[2]

The Information Age formed by capitalizing on advances in the microminiaturization of computers.[3] As this technology spread through daily life and social organization, the modernization of information and communication processes became the driving force of social evolution.

Progression

Library expansion

Library expansion was calculated in 1945 by Fremont Rider to double in capacity every 16 years, if sufficient space were made available.[6] He advocated replacing bulky, decaying printed works with miniaturized microform analog photographs, which could be duplicated on demand for library patrons or other institutions. He did not foresee the digital technology that would follow decades later to replace analog microform with digital imaging, storage, and transmission media. Automated, potentially lossless digital technologies allowed vast increases in the rapidity of information growth. Moore’s law, formulated around 1965, predicted that the number of transistors in a dense integrated circuit doubles approximately every two years.[7]
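
As a rough illustration of what a fixed doubling period implies, the short Python sketch below projects transistor counts forward from the roughly 2,300 transistors of Intel’s 4004 in 1971. The starting point and the two-year period are the only inputs, so the output is indicative rather than a precise industry history.

```python
# Minimal sketch: project transistor counts under a fixed doubling period,
# as in Moore's law. The 1971 starting figure (~2,300 transistors on the
# Intel 4004) is a well-known reference point; later values are projections.

def moores_law(count_start: float, year_start: int, year_end: int,
               doubling_period: float = 2.0) -> float:
    """Project a transistor count forward assuming a fixed doubling period."""
    doublings = (year_end - year_start) / doubling_period
    return count_start * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011):
    print(year, f"{moores_law(2300, 1971, year):,.0f}")
```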

The proliferation of smaller, less expensive personal computers and improvements in computing power by the early 1980s gave increasing numbers of workers sudden access to information and the ability to share and store it. Connectivity between computers within companies allowed workers at different levels to access greater amounts of information.

Information storage

The world’s technological capacity to store information grew from 2.6 (optimally compressed) exabytes in 1986 to 15.8 in 1993, over 54.5 in 2000, and to 295 (optimally compressed) exabytes in 2007. This is the informational equivalent of less than one 730-MB CD-ROM per person in 1986 (539 MB per person), roughly 4 CD-ROMs per person in 1993, 12 CD-ROMs per person in 2000, and almost 61 CD-ROMs per person in 2007.[8] It is estimated that the world’s capacity to store information had reached 5 zettabytes by 2014.[9] This is the informational equivalent of 4,500 stacks of printed books reaching from the Earth to the Sun.
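
The per-person figures above are simple arithmetic: total storage divided by world population, expressed in 730-MB CD-ROMs. The sketch below reproduces them; the population figures are approximate assumptions, which is why the results only roughly match the quoted numbers.

```python
# Back-of-the-envelope check of the per-person storage figures quoted above.
# Storage totals come from the text; the world population values are
# approximate assumptions.

EXA, MEGA = 10**18, 10**6
CD_ROM_MB = 730

data = {  # year: (optimally compressed exabytes, approx. world population)
    1986: (2.6, 4.9e9),
    1993: (15.8, 5.5e9),
    2000: (54.5, 6.1e9),
    2007: (295, 6.6e9),
}

for year, (exabytes, population) in data.items():
    mb_per_person = exabytes * EXA / population / MEGA
    print(year, f"{mb_per_person:,.0f} MB/person",
          f"= {mb_per_person / CD_ROM_MB:.1f} CD-ROMs")
```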

Information transmission

The world’s technological capacity to receive information through one-way broadcast networks was 432 exabytes of (optimally compressed) information in 1986, 715 (optimally compressed) exabytes in 1993, 1.2 (optimally compressed) zettabytes in 2000, and 1.9 zettabytes in 2007 (this is the information equivalent of 174 newspapers per person per day).[8] The world’s effective capacity to exchange information through two-way telecommunication networks was 281 petabytes of (optimally compressed) information in 1986, 471 petabytes in 1993, 2.2 (optimally compressed) exabytes in 2000, and 65 (optimally compressed) exabytes in 2007 (this is the information equivalent of 6 newspapers per person per day).[8] In the 1990s, the spread of the Internet caused a sudden leap in access to and ability to share information in businesses and homes globally. Technology was developing so quickly that a computer costing $3000 in 1997 would cost $2000 two years later and $1000 the following year.
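
The “newspapers per person per day” comparison is likewise plain arithmetic. Assuming a 2007 world population of roughly 6.6 billion, the sketch below derives the per-person daily volume implied by the quoted totals and the newspaper size (around 4.5 MB) that makes the 174 and 6 figures mutually consistent.

```python
# Back-of-the-envelope check of the "newspapers per person per day" figures
# for 2007. The population value is an approximate assumption; the implied
# newspaper size falls out of the quoted numbers rather than being official.

ZETTA, EXA = 10**21, 10**18
POPULATION_2007 = 6.6e9
DAYS_PER_YEAR = 365

cases = [
    ("one-way broadcast", 1.9 * ZETTA, 174),  # total bytes, quoted newspapers/day
    ("two-way telecom", 65 * EXA, 6),
]

for label, total_bytes, papers_per_day in cases:
    per_person_day = total_bytes / POPULATION_2007 / DAYS_PER_YEAR
    print(f"{label}: {per_person_day / 1e6:.0f} MB/person/day "
          f"= {papers_per_day} newspapers of about "
          f"{per_person_day / papers_per_day / 1e6:.1f} MB each")
```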

Computation

The world’s technological capacity to compute information with humanly guided general-purpose computers grew from 3.0 × 10^8 MIPS in 1986, to 4.4 × 10^9 MIPS in 1993, to 2.9 × 10^11 MIPS in 2000, and to 6.4 × 10^12 MIPS in 2007.[8] An article in the journal Trends in Ecology and Evolution reports that by now digital technology “has vastly exceeded the cognitive capacity of any single human being and has done so a decade earlier than predicted. In terms of capacity, there are two measures of importance: the number of operations a system can perform and the amount of information that can be stored. The number of synaptic operations per second in a human brain has been estimated to lie between 10^15 and 10^17. While this number is impressive, even in 2007 humanity’s general-purpose computers were capable of performing well over 10^18 instructions per second. Estimates suggest that the storage capacity of an individual human brain is about 10^12 bytes. On a per capita basis, this is matched by current digital storage (5×10^21 bytes per 7.2×10^9 people)”.[9]
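
The comparison in the quoted passage reduces to a few ratios, reproduced below using only the figures it gives.

```python
# Quick arithmetic behind the brain-vs-computers comparison quoted above.
# All input figures are taken from the cited passage.

synaptic_ops_low, synaptic_ops_high = 1e15, 1e17  # synaptic ops/s, one human brain
computer_ops_2007 = 1e18                          # instructions/s, all general-purpose computers (2007)
brain_storage_bytes = 1e12                        # estimated storage of one human brain
digital_storage_bytes, population = 5e21, 7.2e9   # world digital storage and population

per_capita_storage = digital_storage_bytes / population
print(f"computers vs. one brain (ops/s): {computer_ops_2007 / synaptic_ops_high:.0f}x "
      f"to {computer_ops_2007 / synaptic_ops_low:.0f}x")
print(f"digital storage per person: {per_capita_storage:.1e} bytes "
      f"(about {per_capita_storage / brain_storage_bytes:.1f}x one brain)")
```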

Relation to economics

Eventually, information and communication technology (ICT), including computers, computerized machinery, fiber optics, communication satellites, the Internet, and other ICT tools, became a significant part of the economy. Microcomputers were developed, and many businesses and industries were greatly changed by ICT.

Nicholas Negroponte captured the essence of these changes in his 1995 book, Being Digital.[10] His book discusses similarities and differences between products made of atoms and products made of bits. In essence, a copy of a product made of bits can be made cheaply and quickly, and shipped across the country or internationally quickly and at very low cost.


Information flows both ways.

Below are some negative aspects of this information age…

Smart televisions

Samsung hit the headlines when its smart TVs were accused of being a little too smart for their own good.

The device’s privacy policy suggested that users shouldn’t discuss any sensitive topics whilst the television was plugged in, as the information could be transmitted to a third party. Of course, this generated concerns, as users felt their right to privacy was being encroached upon.

However, the concerns were confidently dismissed and Samsung assured users their fears were unfounded. Samsung argued that the smart TVs weren’t always listening and that they remained in a ‘standby’ mode until activated by a pre-programmed phrase.

So logically speaking, how does the smart TV know it has heard a pre-programmed phrase? The microphone must be on, so that ambient sounds can be compared against the pre-programmed phrases. We already know the device can transmit data over the Internet. The real issue here is whether data can be transmitted at the wrong time, to the wrong people. What if there were a simple bug that kept the microphone from shutting off once it has been turned on?
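
A conceptual sketch of how such a ‘standby’ mode plausibly works is shown below; it is not Samsung’s actual implementation, and the wake phrase, chunk format and helper functions are invented stand-ins. The point it illustrates is that every chunk of audio has to be analysed locally against the wake phrase, and only after a match is anything streamed off the device, which is exactly why a bug in resetting that state would matter.

```python
# Conceptual sketch only - not Samsung's firmware. Audio chunks are modelled
# as strings for simplicity; all helper functions are hypothetical stand-ins.

WAKE_PHRASE = "hi tv"          # assumed example wake phrase
listening_to_commands = False

def detect_wake_phrase(chunk: str) -> bool:
    return WAKE_PHRASE in chunk.lower()

def detect_end_of_command(chunk: str) -> bool:
    return chunk.strip() == ""  # stand-in for "silence detected"

def send_to_speech_service(chunk: str) -> None:
    print("transmitting to third-party server:", chunk)

def on_audio_chunk(chunk: str) -> None:
    """Called for every chunk the microphone captures, even in standby."""
    global listening_to_commands
    if not listening_to_commands:
        # Standby: audio is inspected locally; nothing leaves the device.
        listening_to_commands = detect_wake_phrase(chunk)
    else:
        # Active: audio is streamed out for natural language processing.
        send_to_speech_service(chunk)
        if detect_end_of_command(chunk):
            # A bug here (the flag never resetting) would keep the stream open.
            listening_to_commands = False

for chunk in ["just chatting", "hi tv what's the weather", "in paris", ""]:
    on_audio_chunk(chunk)
```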

Additionally, some information, like IP addresses and other stored data may be transmitted as well. According to Samsung, its speech recognition technology can also recognise regional dialects and accents, among other things, in order to enhance the user experience.

To achieve this, smart television makers often employ third-party applications and servers to process the data received, though this is encrypted during transmission and not retained or sold – at least according to the company’s privacy policy.
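
For illustration, “encrypted during transmission” typically means something like the sketch below: the device opens a TLS (HTTPS) connection and posts the captured audio to a processing server. The host name and endpoint are made-up examples, not Samsung’s actual service, and what happens to the data once it reaches the server is outside the device’s control.

```python
# Minimal sketch of transport encryption for a voice clip. The host and
# path are hypothetical examples; HTTPSConnection negotiates TLS, so the
# payload is encrypted while in transit.

import http.client

def upload_voice_clip(audio_bytes: bytes) -> int:
    conn = http.client.HTTPSConnection("speech.example-processor.com")
    conn.request(
        "POST",
        "/v1/recognise",
        body=audio_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )
    status = conn.getresponse().status
    conn.close()
    return status

# upload_voice_clip(b"...captured audio...")  # needs a real endpoint to run
```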

So the actual question is: can we trust that the encryption is done correctly, and nobody has stolen the keys? Can we trust that the third parties doing natural language processing haven’t been compromised?

Smart phones

Conspiracy theories needn’t be complex. Most people understand that there is a potential threat when using a smart phone in a public place, but that understanding is often muddled.

For example, some people believe that simply charging a device via a wall socket can result in stolen data – the premise is there, but the understanding is skewed. Stealing data in this manner is nigh on impossible.

Similarly, recent videos on YouTube and Facebook claimed that stickers affixed to mobile phone batteries were used for covert data collection and transmission, when in fact they are merely Near Field Communication (NFC) transmitters. Removing these stickers – as suggested by these videos – would render the device useless for apps that use NFC, such as Apple Pay and Google Wallet.

One of the few scenarios where NFC could contribute to the compromising of personal data is if a device is in the vicinity of another user who could transmit malicious code. This could then exploit known vulnerabilities in a document reader or browser, or even the operating system itself.

Smart meters

Smart meters are top of the list when it comes to security and privacy concerns. Privacy activists have argued over the years that smart meters open up opportunities for companies to collect detailed personal information about customers, such as the television shows they watch, the appliances they use, the music they listen to, the websites they visit and their banking habits.

These conspiracy theorists claim that smart meters really are too smart and are used as the ultimate spying tool, making the electrical grid and the utilities that run it the ultimate spies.

Often people do have the right intuitions without being technically accurate, and their concerns aren’t completely absurd. Different appliances draw characteristically different amounts of power, so when consumption rises, smart meter readings can suggest which devices and appliances might be in use in a house.

Although this isn’t the same as a smart meter knowing what you’re watching on TV or cooking for dinner, it is understandable that people are wary about their right to privacy and want to make sure that it isn’t being infringed upon.
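
To make the intuition concrete, the toy sketch below (not a real utility algorithm) matches step changes in metered power against rough per-appliance wattage signatures; the signatures and readings are invented for illustration.

```python
# Toy illustration of why fine-grained meter readings hint at appliance use:
# step changes in power draw are matched against rough per-appliance
# signatures. All wattages and readings here are invented.

APPLIANCE_SIGNATURES = {   # typical switch-on step, in watts (illustrative)
    "kettle": 2000,
    "washing machine": 500,
    "television": 100,
}

def guess_appliance(step_watts: float, tolerance: float = 0.2):
    """Return the appliance whose signature best matches a power step."""
    name, watts = min(APPLIANCE_SIGNATURES.items(),
                      key=lambda item: abs(item[1] - step_watts))
    return name if abs(watts - step_watts) <= tolerance * watts else None

# Successive readings in watts; each upward jump hints at what was switched on.
readings = [150, 160, 2180, 190, 700, 690, 180]
for prev, cur in zip(readings, readings[1:]):
    step = cur - prev
    if step > 50:
        print(f"+{step:.0f} W -> possibly {guess_appliance(step)}")
```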

What is clear is that as the IoT world expands, consumers and businesses alike are inevitably going to become more aware of what this means for their privacy. Technological advancements open up new, unfamiliar areas of technology that are challenging to navigate, and no doubt new conspiracy theories surrounding the IoT will arise that organisations will have to address.

Businesses are right to be concerned about the security of the IoT, and these examples show that the intuition and suspicion behind not trusting these devices are well founded. What needs to change is the understanding of the technical details of these technologies.
