Monday, June 23, 2014

A Vast Machine

1. The process of knowledge formation in a sociotechnical system.

When you are on a quest to answer a particular question, you might settle for the first explanation that seems to suffice. Edwards, however, says the ultimate question to ask yourself repeatedly when you want accurate knowledge is, “How do you know?” Before long, you are on a fact-finding mission that will hopefully lead you to the source of the evidence. Adding the five W’s (who, what, where, when, why) refines the search further, to the point of defining the evidence itself. In his book, Edwards applies the “How do you know?” process to climate change and the various assumptions surrounding global warming. There are many variables associated with collecting weather data from the past and the present, including differences in instruments, varying observation hours, and alternate calculation methods. Attempting to reconcile all of these variables is part of the knowledge purification process. Edwards describes his book as a “historical account of climate science as a global knowledge infrastructure.” (Edwards, pg. 8)

Many people argue that global warming rests on model predictions and is therefore devoid of sound evidence. Edwards uses climate models to show that you can arrive at a solid, shared picture of global warming and even project future trends from simulations. The intricate relationships among technology, complex infrastructures, and human behavior become productive through deliberate organization, which is what an infrastructure provides. If we are to arrive at knowledge more concrete than loose probabilistic guesses, infrastructures and models are essential. After all, “Without models, there are no data.” (Edwards) On the other hand, the more data you have, the better your models.


2. The concept of "vast machine" and knowledge infrastructure. 

Edwards defines “a vast machine” as “a sociotechnical system that collects data, models physical processes, tests theories, and ultimately generates a widely shared understanding of climate and climate change.” (Edwards, pg. 8) So what is the difference between a technical system and a sociotechnical system? The main difference is that a sociotechnical system is rooted in social elements: networks of people, places, and things. It is within these networks that knowledge is created and shared as a communal effort.

In the book, the Large Technical Systems (LTS) model describes the phases through which an infrastructure forms:

1. Invention  
2. Development and innovation 
3. Technology transfer, growth, and competition 
4. Consolidation 
5. Splintering or fragmentation 
6. Decline

Some of the systems that evolve during these stages are eliminated in a process similar to survival of the fittest. Other systems that survive the stages can be linked together to serve a greater need. The most important thing to remember is that infrastructures are networks, meaning they come with their own set of management difficulties or “tensions.” (Edwards, pg. 12) With regard to climate infrastructures, scientists established an international network dedicated to collecting weather data from all parts of the world. The challenge was coordinating all the different weather data systems into a single global climate information infrastructure (or observing system).


3. The basis of scientific knowledge. 

Edwards proposes on page 16 that in order to understand knowledge, one must understand:

How data gets moved around
How it gets created
How it is transformed into reliable information
How that information becomes knowledge

Producing and cultivating scientific knowledge requires not only tools for research and media to share it, but also enough connectedness to give the information legitimacy. Data itself is not necessarily the basis of scientific knowledge, just as instrument readings are not the foundation of weather forecasting. Preliminary data is certainly used when generating forecasts, but models (specifically computer models) are largely responsible for predictions. To accommodate the revision of models over time, climatologists use “reanalysis,” which reconciles observations taken over long periods of time under a single, current analysis model. Any discrepancy is discarded or adjusted according to that model. Once again, infrastructure is the basis of scientific knowledge, and without it, knowledge would be unreliable and erratic.
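To make the reanalysis idea a bit more concrete, here is a minimal sketch in Python of that general pattern: estimate each instrument’s systematic bias against a shared reference series, correct for it, and set aside readings that still disagree. This is not Edwards’ method or any real reanalysis system; the station names, readings, and tolerance are invented for illustration.

    # Minimal sketch (not a real reanalysis system): reconcile observations
    # from instruments with different biases against a common reference.

    def reanalyze(readings, reference, tolerance=2.0):
        """Adjust each station's readings toward a shared reference series.

        readings:  dict mapping station name -> list of raw temperatures
        reference: list of model/reference temperatures for the same times
        Readings that still disagree by more than `tolerance` after bias
        correction are discarded as discrepancies.
        """
        adjusted = {}
        for station, series in readings.items():
            # Estimate the station's systematic offset from the reference.
            offsets = [obs - ref for obs, ref in zip(series, reference)]
            bias = sum(offsets) / len(offsets)
            # Remove the bias, then drop values that still disagree.
            cleaned = []
            for obs, ref in zip(series, reference):
                corrected = obs - bias
                if abs(corrected - ref) <= tolerance:
                    cleaned.append(corrected)
            adjusted[station] = cleaned
        return adjusted

    raw = {
        "station_a": [15.2, 15.8, 16.1],   # reads slightly warm
        "station_b": [13.9, 14.4, 20.0],   # last value is an outlier
    }
    print(reanalyze(raw, reference=[14.5, 15.0, 15.4]))

In this toy run, station_b’s outlier reading is set aside while the rest of its series is nudged toward the reference, which is the flavor of adjustment the paragraph above describes.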


4. The concept of globalist information.

The notion of globalist information begins with the conceptualization of earth as a permeable, mutually interdependent community bound together by information about the whole world. In this sense, the world is full of systems that carry information around the world as well as systems that generate information about the world. These network systems evolved from journalism and postal mail to global environmental monitoring. Edwards concentrates on the oldest globalist information system, the weather data network. Climate knowledge infrastructure is therefore an excellent model for understanding other kinds of knowledge infrastructures, especially since global data is knowledge created through an infrastructure. It is important to build a long-lasting network that produces enduring information about the world that can be used for multiple purposes such as forecasting.

At the beginning of his book, Edwards identifies the distinct relationship between the macro, global system and the micro, individual system. Events and actions at the macro level have the capacity to affect the smallest ecosystem at the micro level, and the reverse holds true for decisions made at the micro level and their impact on the macro level. A similar concept in international relations, “the butterfly effect,” holds that seemingly small, insignificant occurrences can eventually culminate in a series of events of large magnitude. This effect is only amplified by rapid advancements in communication in the globalist information society. Take, for example, Edward Snowden: a few personal decisions produced ramifications in every corner of the world. The power of interconnectedness cannot be denied, yet it is important to keep in mind that the world is also fragile.

Thursday, June 5, 2014

The Stupidity of Computers by David Auerbach

Auerbach starts off by stating that computers are brainless, high-maintenance machines that require babysitting and step-by-step instructions to execute any task. While a computer can access vast amounts of data to satisfy any user’s request, it lacks logic and common sense, especially with regard to human interaction. The breakdown occurs in the computer’s understanding of the user and of its expected task. Communication issues typically stem from “ambiguity inherent in a sentence’s syntax and semantics.” (Auerbach) This puts pressure on the user to be so specific about details and situations that no other interpretation of the language is possible. These limitations challenged programmers to refine a computer’s intelligence by improving its linguistic capabilities.

The author then proceeds to explain the history of the search engine and its progress since the 1960s. From one system to the next, each had its own barriers. It is true that early search engines produced a plethora of results, but these were disorganized and often near-random links that may or may not have been relevant to what the user actually wanted. When Google came along, instead of conquering the semantic issue, its researchers bypassed it altogether by having the computer identify the most appropriate results from the pages that link to them. This set Google apart as a progressive search engine, far surpassing other search engines in its ability to locate relevant pages with analogous information. The problem of a computer’s understanding was not solved, but at least communication improved and keywords produced a higher chance of relevant content.
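For the curious, here is a toy sketch in Python of that link-based ranking idea, in the spirit of PageRank. This is not Google’s actual algorithm or code; the pages, links, and damping value are made up, but it shows how a page’s score can come from the pages that link to it rather than from understanding its text.

    # Toy sketch of link-based ranking (in the spirit of PageRank).
    # The pages and links below are invented for illustration.

    def rank_pages(links, damping=0.85, iterations=50):
        """links: dict mapping each page to the list of pages it links to."""
        pages = list(links)
        rank = {page: 1.0 / len(pages) for page in pages}
        for _ in range(iterations):
            # Every page keeps a small baseline score...
            new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
            # ...and passes the rest of its score along its outgoing links.
            for page, outgoing in links.items():
                if not outgoing:
                    continue
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
            rank = new_rank
        return rank

    links = {
        "home": ["about", "article"],
        "about": ["home"],
        "article": ["home", "about"],
    }
    # Pages that attract links from other well-linked pages score higher.
    print(sorted(rank_pages(links).items(), key=lambda kv: -kv[1]))

The design choice is the point Auerbach makes: the computer never has to “understand” a page, because the collective linking behavior of human authors does the judging for it.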

Computers generate a lot of information, but it is ultimately up to humans to organize that information in a logical, coherent way. When left to the computer’s discretion, items such as books, articles, or web pages are often categorized incorrectly according to the machine’s own criteria. When it comes to shopping on major online sites like Amazon, the categories are again preset by humans. Amazon stores information about its users and their previously purchased goods, which makes its search engine capable of predicting future inquiries. However, there are still flaws in the system that must be caught and corrected by humans.
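As a rough illustration of how purchase history can predict future inquiries, here is a small Python sketch of a “customers who bought this also bought” calculation based on co-purchase counts. It is not Amazon’s actual recommender; the orders and item names are invented.

    # Crude sketch of co-purchase recommendations (not Amazon's system).
    from collections import Counter
    from itertools import combinations

    orders = [
        {"keyboard", "mouse", "monitor"},
        {"keyboard", "mouse"},
        {"monitor", "hdmi cable"},
        {"keyboard", "hdmi cable"},
    ]

    # Count how often each pair of items appears in the same order.
    co_bought = Counter()
    for order in orders:
        for a, b in combinations(sorted(order), 2):
            co_bought[(a, b)] += 1

    def recommend(item, top_n=3):
        """Return the items most frequently purchased together with `item`."""
        scores = Counter()
        for (a, b), count in co_bought.items():
            if a == item:
                scores[b] += count
            elif b == item:
                scores[a] += count
        return [other for other, _ in scores.most_common(top_n)]

    print(recommend("keyboard"))  # ['mouse', 'monitor', 'hdmi cable']

Notice that the machine is only counting; deciding whether the suggestions actually make sense, and cleaning up the ones that do not, is still left to people.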

From the earliest systems to Google, search engines have become more and more advanced, specialized, and sophisticated over time. However, a computer will never completely understand people, nor will it be able to take the place of human intelligence. No computer has ever been able to pass Alan Turing’s intelligence test of convincing an audience it is human. (See the Implications & Examples section below for updated information.) Auerbach comes to the conclusion that because computers will never fully join our world, we will eventually have to join theirs. As we become more integrated with computers and dependent on their powers, we will find ways to conform our thinking to their thinking. In his view, humans will acquire the limitations of computers, thereby “dumbing” themselves down to the level of the machine.


Implications & Examples

One obvious implication is Auerbach’s assumption about the limitations of computers. If the computer has progressed exponentially, as he explains in his article, how can he be so certain the computer will never be able to create ontological categories? With the virtual world being as unpredictable as it is, nothing should ever be assumed. As Auerbach took so much time to expand upon, search engines have become increasingly refined in their classification abilities, and at a very rapid pace. Moreover, a computer cannot possibly pick up sarcasm, political slants, irony, or other nuances unique to the human race, right? Not necessarily. An Israeli research team developed SASI, a Semi-supervised Algorithm for Sarcasm Identification, which can detect sarcastic comments online with 77 percent precision. (http://www.cs.huji.ac.il/~arir/10-sarcasmAmazonICWSM10.pdf) New algorithms surface all the time, closing the gap between artificial intelligence and the human brain. Pattern recognition abilities have heightened a computer’s perception of humans. According to a recent study, scientists successfully developed a program that could detect whether a person was faking injury or in real pain. (http://www.businessinsider.com/r-if-you-want-to-fake-it-dont-do-it-around-this-computer-2014-21) Emotion-detecting robots have been around for a few years now, and the software is only being further perfected.

Oddly enough, while I was writing this blog, a computer passed the 65-year-old Turing Test mentioned previously for the first time ever. Five chatbot programs entered the 2014 Turing Test competition, and the winner, the Russian-developed chatbot “Eugene Goostman,” marks a milestone in the world of artificial intelligence. Eugene fooled 33 percent of the human judges into believing it was a 13-year-old boy. (The threshold for passing is 30 percent.) The consequences of this apparent victory are unknown at this time. A computer that can trick someone into believing it is human could become a tool for conducting cybercrime, or for combating it. The creators of Eugene plan to keep making the machine smarter.

Ultimately, David Auerbach should refer to the stupidity of humanity rather than the stupidity of computers. After all, it is the human’s decision to step down and join the computer’s reality. I personally do not plan on dumbing myself down to a computerized version of myself. We are now entering a phase of human embarrassment in which people are eager to live their lives digitally and define themselves by social media. We still have (relative) control over our dependency on computers; to give up that control is our stupidity, not the computer’s.


References:
http://www.cs.huji.ac.il/~arir/10-sarcasmAmazonICWSM10.pdf
http://www.businessinsider.com/r-if-you-want-to-fake-it-dont-do-it-around-this-computer-2014-21
http://www.washingtonpost.com/news/morning-mix/wp/2014/06/09/a-computer-just-passed-the-turing-test-in-landmark-trial/
http://www.nbcnews.com/tech/tech-news/turing-test-computer-program-convinces-judges-its-human-n125786