Sunday, August 3, 2014

3D Printers

"3D printing will potentially have a greater impact on the world over the next 20 years than all of the innovations from the industrial revolution combined. " 

-Benjamin Grynol, author of Disruptive Manufacturing: The Effects of 3D Printing



Bigger than the internet?

At first glance, does this look like a real gun to you? Could you be fooled by its authenticity if I did not disclose that it is in fact a phony? If you answered "yes" to both questions, you would not be alone. Now, what if I told you that this gun was the product of a 3D printer? Many people might think I was joking, especially since the revelatory world of 3D printing has not reached all audiences quite yet. My mother, for example, has never heard of 3D printing, nor does she grasp how such a thing would even operate. Nonetheless, 3D printers are taking center stage in the world of technology, and the public is beginning to take notice of one of the most innovative and revolutionary pieces of machinery developed in the last decade. Some assert that 3D printing is conceivably the largest manufacturing achievement since the Industrial Revolution and has the capacity to completely overturn the manufacturing industry in a matter of years. According to the Financial Times, 3D printing has the potential to be an even greater invention than the internet. Is that an exaggeration fueled by excessive hype around the 3D printing industry? Before taking a side, it is important to explore the industry further and observe what is really at stake.

A brief history

The inception of 3D printing can be traced to a small town called San Gabriel, California, where an engineer by the name of Charles “Chuck” Hull worked for a small company that created protective coatings for furniture using ultraviolet (UV) light.
It was not long before he realized that he could apply the concept of layering these coatings on top of each other until they eventually formed objects. Upon this discovery, he decided to explore the possibility of building a machine that could essentially print layer upon layer, especially for building prototypes to test product designs. The idea was cutting edge and had the potential to replace the traditional processes of mold making and casting. Not only would this groundbreaking process reduce costs, but it would also expedite the creation of prototypes.

Hull called his invention “stereolithography,” a process in which thin layers of curable material are built up to form 3D objects; it became the very first 3D printing technology. Hull went on to patent his invention and co-founded 3D Systems, which remains one of the largest players in the 3D printing industry today.

The 3D Systems SLA250 stereolithography unit

The mechanics

What is unique about the process is that the blueprint is built around an internal examination of the object being created; printing then expands to the outer elements, meaning the procedure works from the inside out. The skeletal elements are essential to the mechanics of the object, and each internal chamber is meticulously detailed for accuracy of measurement and function. After a blueprint is designed to the specifications of the desired object, the 3D printer interprets the downloaded design by breaking every detail of the object into printable layers. The chosen material powder is shifted into place and, when hit by the laser beam, melts and solidifies into form. The following diagram shows the process of 3D printing and the major parts of a 3D printer.
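To complement the diagram, here is a minimal slicer sketch in Python. It is purely illustrative (not MakerBot's or any vendor's actual software): it shows the "break the design into printable layers" step by cutting a triangle mesh with a stack of horizontal planes, one per layer, and collecting the contour segments the printer would trace at each height.

# Minimal, illustrative slicer (an assumption for illustration only; this is
# not any real printer's toolchain). It turns a triangle mesh into per-layer
# contour segments, i.e. the "translate the design into printable layers" step.

def slice_mesh(triangles, layer_height=0.1):
    """triangles: list of triangles, each a tuple of three (x, y, z) vertices.
    Returns a dict mapping each layer height z to a list of 2D line segments."""
    zs = [v[2] for tri in triangles for v in tri]
    z_min, z_max = min(zs), max(zs)
    layers = {}
    z = z_min + layer_height / 2.0        # slice through the middle of each layer
    while z < z_max:
        segments = []
        for tri in triangles:
            crossings = []
            for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
                if (a[2] - z) * (b[2] - z) < 0:            # edge crosses the plane
                    t = (z - a[2]) / (b[2] - a[2])          # interpolate crossing point
                    crossings.append((a[0] + t * (b[0] - a[0]),
                                      a[1] + t * (b[1] - a[1])))
            if len(crossings) == 2:                         # triangle cut by this layer
                segments.append((crossings[0], crossings[1]))
        if segments:
            layers[round(z, 6)] = segments
        z += layer_height
    return layers

# Example: slice a simple tetrahedron into 0.1 mm layers.
tetrahedron = [
    ((0, 0, 0), (1, 0, 0), (0, 1, 0)),
    ((0, 0, 0), (1, 0, 0), (0, 0, 1)),
    ((0, 0, 0), (0, 1, 0), (0, 0, 1)),
    ((1, 0, 0), (0, 1, 0), (0, 0, 1)),
]
for height, segs in sorted(slice_mesh(tetrahedron).items()):
    print("layer at z = %.2f mm: %d contour segments" % (height, len(segs)))

Real slicing software does far more (wall thickness, infill, support structures, toolpath ordering), but the underlying idea of decomposing a model into stacked cross-sections is the same.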












An interested buyer has a couple of options when purchasing a 3D printer: consumer units or home-built machines. With home-built machines, do-it-yourself kits are shipped to the purchaser, who assembles the parts at home. The advantages of home-built machines are that they are cheaper, more easily accessible, and generally better understood by their owners, who know how to fix them because they pieced them together in the first place. One of the leading 3D printer companies in the world, MakerBot, is a New York-based company that manufactures and sells affordable 3D printers. The MakerBot Digitizer, priced at $799, is the least expensive route into the company’s line, while larger models range upwards of $6,500.

DIY Home Kit
MakerBot Replicator

Endless possibilities

One year ago, I had the pleasure of visiting a MakerBot retail store in New York City, where I observed the fascinating process of Replicator Desktop 3D Printers making new objects. The in-store demo I found most interesting was the ability of the MakerBot Digitizer Desktop 3D Scanner to scan an object and produce a 3D model of it. I watched in awe as a cube the size of a toaster oven formed a lifelike heart layer by layer, using a nozzle that deposited plastic material onto the model with a precision of one tenth of a millimeter. (The Economist) Because of the exacting nature of 3D printing, printing objects is not always a quick process; many can take hours to fully print.

Owners of a 3D printer looking to download quality printable models should visit www.thingiverse.com, an online marketplace where people can contribute their designs. Particularly adventurous users can turn to programs that let them create personalized blueprints. The vast selection of digital designs includes everything from shoes to weapons to musical instruments and can be used with all 3D printing systems. Being a violinist myself, I find the idea of customizing a violin to my specifications and printing it on demand an entertaining thought. The possibilities are virtually endless.

In 2011, the world's first fully printed aircraft emerged thanks to the innovation of a few engineers at the University of Southampton. The plane, an unmanned aerial vehicle (UAV), flew flawlessly and took a mere two days to design and five days to print. The budget for the robotic aircraft came in at just $8,439.50, pocket change compared to the cost of comparable UAVs. Also in 2011, Jim Kor of KOR EcoLogic in Winnipeg spearheaded efforts to produce the first fully printed car, with fuel economy of 200 MPG on the highway and 100 MPG in the city. Named Urbee, the vehicle is not only incredibly energy-efficient but also environmentally friendly.

3D Printed Surveillance Drone



A Moral Dilemma


Being able to print virtually any object on a 3D printer raises many questions and concerns about the ethical implications of the technology. Because these printers can be purchased at historically low prices for individual use, it is difficult to enforce regulations within the walls of a person’s home. This complicates the ethical considerations, especially the legal issues, for a number of reasons. While 3D printing offers a superabundance of advantages and extensive benefits, every groundbreaking advancement has its downsides.

3D Printed Knife
One of the most controversial aspects of 3D printing is the capacity to print weapons in the privacy of one’s home. What is even more astonishing about these weapons is that they actually function. Being able to print a lethal weapon anytime, anywhere completely circumvents firearm regulations; all it takes is a 3D printer, the internet, and a digital blueprint. Criminals would have instant access to weapons without the high costs and regulations. Consumers do not register these printed handguns, nor do they undergo the background checks normally conducted with the purchase of regular handguns. Even more alarming is the fact that plastic firearms do not trigger metal detectors, potentially causing major security breaches. Not being able to detect printed guns at airports or government facilities could open the door to major acts of terrorism.
Blueprint Design for Handgun

Saving Lives

Over the past few years, 3D printing has emerged in the field of health, particularly in producing medical implants. In 2012, 3D printers made medical history when a 3D printed jaw was implanted in a woman from the Netherlands. These jaws are produced layer by layer from titanium powder, which makes them significantly less expensive than traditional jaw implants. Because implants of this type are printed to exact specifications, almost no material is wasted in the process. It will be interesting to watch the advancement of 3D printed medical implants and the various types of surgeries and transplants in which they are utilized.

3D Printed Artificial Ear
Organovo, a medical research company, developed a bioprinting process in 2010 that essentially creates functional human tissue by printing layer after layer of human cells. These tissues are currently being built into human organs, heart valves, knee cartilage, blood vessels, and bone implants, all used for scientific research and drug testing. Astonishingly, the company claims that its 3D printer will produce a human liver by the end of 2014. Although the liver cannot currently be used for transplants, it is a major milestone on the journey toward producing clinically approved organs on demand suitable for surgery. Waiting lists of patients in need of organ transplants would diminish, and more lives could potentially be saved by this radical invention.

3D Printed Liver Tissue

Monday, July 14, 2014

The Shallows: What the Internet Is Doing to Our Brains

Help, I can’t focus!

Do you have trouble these days focusing on lengthy pieces of writing? Does your mind wander and seem to flit between words without fully digesting or retaining the information? If you face these cognitive obstacles on a daily basis, you are not alone in your struggle to attain unadulterated concentration. In fact, Nicholas Carr, a highly regarded American author, was so disturbed by the apparent disintegration of his thought process and memory that he wrote a book called The Shallows: What the Internet Is Doing to Our Brains. Throughout this New York Times bestseller, Carr spotlights the alarming effects of the world wide web on human intelligence, the composition of the brain, and people's everyday lives. To explain why so many people are struggling with distractedness, he details various historical technological advancements and their roles in changing the very neurological makeup of our intellectual capacity. The internet, in his opinion, is chiefly responsible for the greatest change in the way people think and even write. Carr explores whether the internet is making our minds smarter, dumber, or just different. Finally, he inevitably surveys the ethical ramifications of such a massive invention, one that has forced millions of people into technological slavery.

Plasticity

Before accepting the revolutionary idea that the internet has indeed altered the physical composition of our brains, it is necessary to substantiate the underlying notion that the brain is even capable of change. After all, for hundreds of years most neurologists and biologists maintained the firm belief that the structure of the adult brain was neither malleable nor transformable. Astonishingly, it was not until the 1980s that the concept of the brain's “plasticity” was acknowledged and embraced, and only recently have scientists been equipped with the skills and machinery to monitor brain activity. With the conclusion that our brains are in flux throughout our lives comes increased responsibility and accountability. Our experiences and circumstances greatly impact and mold our neural circuitry, meaning we have an incredible amount of control over the way we think and process. Consequently, the way we think also has an indelible effect on the physical structure of our brains. As Carr writes, “We become, neurologically, what we think.” (Carr, p. 33)

Now that Carr has established that the web is capable of modifying the structure of our brains, it is important to note why and how such change occurs. Intellectual technologies are defined as “tools we use to extend or support our mental powers” and are considered to have the greatest and most long-term impact on our thought processes. (Carr, p. 44) Moreover, each intellectual technology encompasses an intellectual ethic, and that ethic is what has the most dramatic effect on humans. The original intentions behind technological inventions are not always the outcomes that actually surface as byproducts. It is essential to step back and observe not only the products and results of thought but the thought itself.

According to an Israeli study designed to monitor the web activity of millions of users in multiple countries, the average time spent on a web page is 19-27 seconds. (Carr, p. 137) “Power browsing” predominates over leisurely reading; rather than fully committing oneself to the informational text of a web page, the brain floats at the surface, skimming or reverting to graphics. This type of shallow engagement means people are constantly shifting their attention, which means the brain is constantly reorienting itself. Over time, the brain becomes accustomed to the chaotic frenzy of visiting 10-15 pages in a span of five minutes, which severely handicaps the art of concentration. The addition of hypertext and “hypermedia” (words linked to sounds, images, and videos) adds exponentially to the bouncing between pages. It is not necessarily a problem that people browse and skim through web sites; the issue lies in the realization that skimming is becoming the most prevalent mode of reading. While one might argue that web browsing develops and promotes multitasking and gives users access to a plethora of information, Carr once again returns to his recurring theme that people no longer have the ability to think or read deeply. We are training our brains to think in fragmented, scattered patterns. The tsunami of emails, social media alerts, messages, RSS feeds, news headlines, and other web-based interruptions has overloaded our brains, making it increasingly difficult to focus our attention.

The end of books?

Until the invention of Gutenberg’s printing press in 1450, the world consisted mainly of oral cultures in which stories, experiences, songs, and any other worthy information were passed from generation to generation by word of mouth. When books began to be printed and distributed after 1450, people developed the new mental habits that reading required, such as deep concentration, meditation, undisturbed attention, and intense mental discipline. Reading books quickly became a normal part of daily activity and remained an essential component of everyday life, that is, until the introduction of the internet. Now, the amount of time spent reading print publications has drastically decreased due to constant use of the web. People in their twenties spend an astonishing 19 hours per week online, while adults between 18 and 55 spend as much as 30% of their leisure time online. (Carr, p. 86) As a result, the attentiveness demanded by deep reading is being shattered, and many people have ceased to read books completely. After all, “The intellectual environment of the internet is like reading a book while doing a crossword puzzle.” (Carr, p. 126) Now that companies such as Google are digitizing books and aspiring to make all printed literature available online, the literary world as we know it has changed forever. The finality and permanence of printed works are slowly being overtaken by the impermanence and ephemeral nature of electronic text.

Digital memory

With the creation and prolific distribution of printed materials came a vast supply of diverse and valuable information that the newly literate society safeguarded and committed to memory. The key to such detailed and voluminous memorization is being able to process, internalize, and digest literature attentively and with complete engagement of the brain. The brain's capacity to retain information depends largely on the mode by which the information is processed. For example, thorough processing yields more vivid and accurate memories, while shallow and disrupted processing yields memories of lesser quality. “With each expansion of our memory comes an enlargement of our intelligence,” Carr explains. (Carr, p. 192) By this reasoning, our intelligence depends on our cognitive load and on how much of it we are capable of analyzing and absorbing deeply.

The emergence of “artificial memory” began with new media storage options such as copy machines, videotapes, audiotapes, and computer drives where information could be conveniently deposited and retrieved at pleasure. Rather than committing information to memory, humans could sift through their own experiences and filter what they desired to personally consume or outsource to an external processing device. As the internet made its debut in society, it quickly began to replace personal memory, especially the storage of information. While it afforded people the luxury of endless, bountiful information, they did not necessarily know more or increase their level of intelligence. For the first time in history, web users could utilize a search engine to instantly collect any information conceivable. Today, this has developed into an unhealthy dependency because humans are encouraged to rely on the internet rather than their own memory. The ethical and social ramifications reverberate in the deepest fibers of society, namely the sustenance of culture.

Implications

In conclusion, Carr has written a fascinating book that highlights many thought-provoking ideas about the internet's impact on human information processing. In my opinion, the overall perspective he casts on the web is quite negative. He never says outright that it is an evil invention, but he does broadcast its destructive powers over our society. While I do not completely disagree with him, I also recognize the enormous advancements the internet has bestowed on humans worldwide. He may not have intended to attack the internet so vehemently, but the gist I find fault with is that the web has done more harm than good. Moreover, Carr repeatedly dwells on the harmful repercussions of the internet but does not offer any remedial advice. After finishing the book, I thought to myself, “So what am I supposed to do?” If Carr had provided some strategic guidance on how to deflect the adverse effects, the book would have ended on a more positive note. Carr concedes that many people do not have the option of disconnecting from the internet. I would like to think that those people are not doomed to brain deterioration because their livelihoods depend on the web, and that we actually have a choice about whether we allow ourselves to be controlled by technology.

Monday, June 23, 2014

A Vast Machine

1. The process of knowledge formation in a sociotechnical system.

When you are on a quest to find the answer to a particular question, you might find what you think suffices as an explanation for your inquiry. However, Edwards says the ultimate question to ask yourself repeatedly when you want accurate knowledge is, “How do you know?” Before you know it, you are on a fact-finding mission that will hopefully lead you to the source of the evidence. Moreover, asking the five W's (who, what, where, why, when) will refine your results further, to the point of defining evidence itself! In his book, Edwards uses the “How do you know?” process to analyze climate change and the various assumptions regarding global warming. There are many variables associated with collecting weather data from the past and the present, including differences in instruments, varying observation hours, and alternate calculations. Attempting to eliminate all these variables is part of the knowledge purification process. Edwards describes his book as a “historical account of climate science as a global knowledge infrastructure.” (Edwards, p. 8) Many people argue that global warming is based on model predictions and is therefore devoid of sound evidence. Edwards uses climate models to show that you can arrive at a conclusive reality for global warming and even project future trends from simulation models. The intricate relationships among technology, complex infrastructures, and human behavior can be managed through strategic organization, that is, through infrastructure. If we are to arrive at knowledge more concrete than probabilistic predictions, infrastructures and models are essential. After all, “Without models, there are no data.” (Edwards) On the other hand, the more data you have, the better your models.
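As a toy illustration of “Without models, there are no data,” here is a short Python sketch. The station locations and temperature readings are hypothetical, and inverse-distance weighting is my own stand-in for the far more sophisticated analysis models Edwards describes; the point is simply that a handful of scattered observations become a complete gridded field only because a model, however simple, decides how to fill the gaps between them.

# Toy illustration of "without models, there are no data": scattered
# temperature readings (hypothetical values) become a complete gridded field
# only through an interpolation model, here simple inverse-distance weighting.

def interpolate(stations, grid_points, power=2):
    """stations: list of (x, y, temperature). Returns {(x, y): estimated temperature}."""
    field = {}
    for gx, gy in grid_points:
        weights, weighted_sum = 0.0, 0.0
        for sx, sy, temp in stations:
            distance_sq = (gx - sx) ** 2 + (gy - sy) ** 2
            if distance_sq == 0:                 # grid point sits exactly on a station
                weights, weighted_sum = 1.0, temp
                break
            w = 1.0 / distance_sq ** (power / 2)
            weights += w
            weighted_sum += w * temp
        field[(gx, gy)] = weighted_sum / weights
    return field

# Three hypothetical stations reporting temperatures in degrees Celsius.
stations = [(0, 0, 15.0), (10, 0, 22.0), (5, 10, 18.0)]
grid = [(x, y) for x in range(0, 11, 5) for y in range(0, 11, 5)]

for point, temp in sorted(interpolate(stations, grid).items()):
    print("grid point %-8s estimated %.1f C" % (point, temp))

Real climate analysis and reanalysis systems blend observations with physical models rather than with a simple weighting rule, but the dependence of the final "data" on the model is the same.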


2. The concept of "vast machine" and knowledge infrastructure. 

Edwards defines “a vast machine” as “a sociotechnical system that collects data, models physical processes, tests theories, and ultimately generates a widely shared understanding of climate and climate change.” (Edwards, p. 8) So what is the difference between a technical system and a sociotechnical system? Mainly that a sociotechnical system is rooted in social elements, meaning networks of people, places, and things. It is within these networks that knowledge is created and shared as a communal effort.

In the book, the Large Technical Systems (LTS) model addresses the different phases in which infrastructure is formed:

1. Invention  
2. Development and innovation 
3. Technology transfer, growth, and competition 
4. Consolidation 
5. Splintering or fragmentation 
6. Decline

Some of the systems that evolve during these stages are eliminated in a process similar to survival of the fittest. Others survive the stages and can be linked together to serve a greater need. The most important thing to remember is that infrastructures are networks, meaning they come with their own set of management difficulties, or “tensions.” (Edwards, p. 12) With regard to climate infrastructure, scientists established an international network designated to collect weather data from all parts of the world. The challenge was coordinating all the different weather data systems into a single global climate information infrastructure (or observing system).


3. The basis of scientific knowledge. 

Edwards proposes on page 16 that in order to understand knowledge, one must understand:

How data get moved around
How they are created
How they are transformed into reliable information
How that information becomes knowledge

Producing and cultivating scientific knowledge requires not only tools for research and media to share it but also enough connectedness to give the information legitimacy. Data by themselves are not necessarily the basis of scientific knowledge, just as instrument readings are not the foundation of weather forecasting. Preliminary data are certainly used when generating forecasts, but models (specifically computer models) are largely responsible for predictions. To accommodate the revision of models over time, climatologists use “reanalysis,” which reconciles data taken over long periods. Any discrepancy is discarded or adjusted according to the analysis model. Once again, infrastructure is the basis of scientific knowledge, and without it, knowledge would be unreliable and erratic.


4. The concept of globalist information.

The notion of globalist information begins with conceiving of the earth as a permeable, mutually interdependent community described by a collection of information about the whole world. In this sense, the world is full of systems for conducting information around the world as well as systems for generating information about the world. These network systems have evolved from journalism and postal mail to global environmental monitoring. Edwards chooses to concentrate on the oldest globalist information system, the weather data network. Climate knowledge infrastructure is an excellent model for other types of knowledge infrastructures, especially since global data are knowledge created through an infrastructure. It is important to build a long-lasting network that produces enduring information about the world, information that can be used for multiple purposes such as forecasting.

At the beginning of his book, Edwards distinguishes between the macro, global system and the micro, individual system. While events and actions may occur at the macro level, they have the capacity to affect the smallest ecosystem at the micro level, and the same holds true for decisions made at the micro level and their impact on the macro level. A similar concept in international relations, “the butterfly effect,” is a theoretical approach holding that seemingly small, insignificant occurrences on a small scale can eventually culminate in a series of events of large magnitude. This is only amplified by rapid advances in communication in the globalist information society. Take, for example, Edward Snowden; he made a few personal decisions that produced ramifications in every corner of the world. The power of interconnectedness cannot be denied, yet it is important to keep in mind that the world is also fragile.

Thursday, June 5, 2014

The Stupidity of Computers by David Auerbach

Auerbach starts off by stating that computers are brainless, high-maintenance machines that require babysitting and step-by-step instructions to execute any task. While a computer can access a vast amount of data to satisfy any user's request, it lacks logic and common sense, especially with regard to human interaction. The breakdown occurs in the computer's understanding of the user and its expected task. Communication issues typically stem from the “ambiguity inherent in a sentence's syntax and semantics.” (Auerbach) This adds the pressure of having to be overly specific about details and situations, leaving no other possible interpretation of human language. These limitations challenged programmers to refine a computer's intelligence by improving its linguistic capabilities.

The author then proceeds to explain the history of the search engine and its progress since the 1960s. From one system to the next, each had its own barriers. It is true that early search engines produced a plethora of results, but these were disorganized and oftentimes seemingly random links that might or might not be relevant to the user's original query. When Google came along, instead of conquering the semantic problem, a few researchers decided to bypass it altogether by having computers identify the most appropriate results based on the pages that link to them. This set Google apart as a progressive search engine, far surpassing its competitors in its ability to locate relevant pages with analogous information. The problem of a computer's understanding was not solved, but at least communication improved and keywords produced a higher chance of relevant content.
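To make the link-based idea concrete, here is a toy, PageRank-style ranking sketch in Python. It is a simplified illustration under my own assumptions (the four-page "web" below is invented), not Google's actual algorithm: each page repeatedly passes a share of its score to the pages it links to, so heavily linked pages float to the top without the computer understanding a single word of their content.

# Toy link-based ranking (a simplified sketch, not Google's real algorithm).
# Pages are ranked purely by who links to whom; the text is never "understood."

def rank_pages(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    score = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_score = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * score[page] / len(outlinks)
                for target in outlinks:
                    new_score[target] += share
            else:                          # a page with no outgoing links spreads its score evenly
                for p in pages:
                    new_score[p] += damping * score[page] / n
        score = new_score
    return score

# Hypothetical four-page "web": pages that attract more links rank higher.
web = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about"],
    "orphan": ["home"],
}
for page, value in sorted(rank_pages(web).items(), key=lambda item: -item[1]):
    print("%-8s %.3f" % (page, value))

In this toy web, "home" ends up ranked highest simply because the most pages point to it, which is exactly the kind of relevance signal that requires no semantic understanding at all.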

Computers generate a lot of information, but it is ultimately up to humans to organize that information in a logical, coherent way. When left to the computer's discretion, items such as books, articles, or web pages are oftentimes categorized incorrectly based on the machine's own criteria. When it comes to shopping and major online companies like Amazon, the categories are again preset by humans. Amazon stores information about its users and their previously purchased goods, which makes its search engine capable of predicting future inquiries. However, there are still flaws within the system that must be caught and corrected by humans.

From the earliest search engines to Google, search engines have become more and more advanced, specialized, and sophisticated over time. However, a computer will never completely understand people, nor will it be able to take the place of human intelligence. No computer has ever been able to pass Alan Turing's intelligence test by convincing an audience that it is human. (See the implications section for updated information.) Auerbach comes to the conclusion that because computers will never be able to fully join our world, we will eventually have to join theirs. As we become more fully integrated with computers and dependent on their powers, we will find ways to conform our ways of thinking to their ways of thinking. Auerbach concludes that humans will acquire the limitations of computers, thereby “dumbing” themselves down to the level of the machine.


Implications & Examples

One obvious implication is Auerbach's assumption about the limitations of the computer. If the computer has progressed exponentially, as he explains in his article, how can he be so certain that it will never be able to create ontological categories? With the virtual world being as unpredictable as it is, nothing should ever be assumed. As Auerbach took so much time to explain, search engines have refined their classification abilities at a very rapid pace. Moreover, a computer cannot possibly pick up on sarcasm, political slants, irony, or other nuances unique to the human race, right? Not necessarily. An Israeli research team developed SASI, a Semi-supervised Algorithm for Sarcasm Identification, which can detect sarcastic comments online with 77 percent precision. (http://www.cs.huji.ac.il/~arir/10-sarcasmAmazonICWSM10.pdf) New algorithms surface all the time, closing the gap between artificial intelligence and the human brain. Pattern recognition has heightened a computer's perception of humans. According to a recent study, scientists succeeded in developing a program that can detect whether a person is faking injury or in real pain. (http://www.businessinsider.com/r-if-you-want-to-fake-it-dont-do-it-around-this-computer-2014-21) Emotion-detecting robots have been around for a few years now, and the software is only being further perfected.
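For a sense of how such pattern recognition works at its simplest, here is a toy bag-of-words classifier in Python. It is purely illustrative and far cruder than SASI or any research system, and the labeled training phrases are invented: it merely counts which words appear in examples labeled "sarcastic" versus "sincere" and scores new text against those counts with naive Bayes.

# Toy bag-of-words classifier (illustrative only; far simpler than SASI or any
# real sarcasm detector). It learns word counts from labeled examples and then
# labels new text with naive Bayes scoring.

import math
from collections import Counter

def train(examples):
    """examples: list of (text, label) pairs. Returns word counts and label counts."""
    word_counts = {}                      # label -> Counter of words
    label_counts = Counter()              # label -> number of training examples
    for text, label in examples:
        word_counts.setdefault(label, Counter()).update(text.lower().split())
        label_counts[label] += 1
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Returns the label whose training examples best match the text's words."""
    vocabulary = {w for counts in word_counts.values() for w in counts}
    total_examples = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label, counts in word_counts.items():
        score = math.log(label_counts[label] / total_examples)     # prior
        denominator = sum(counts.values()) + len(vocabulary)       # add-one smoothing
        for word in text.lower().split():
            score += math.log((counts[word] + 1) / denominator)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented training phrases, just to show the mechanics.
examples = [
    ("oh great another monday morning", "sarcastic"),
    ("yeah right that will totally work", "sarcastic"),
    ("this printer works great and I love it", "sincere"),
    ("totally satisfied with this purchase", "sincere"),
]
word_counts, label_counts = train(examples)
print(classify("oh yeah that went great", word_counts, label_counts))

Systems like SASI rely on much richer features and far larger labeled datasets, but the underlying move is the same: learn statistical patterns from examples rather than "understanding" the language.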

Oddly enough, while I was writing this blog post, a computer passed the 65-year-old Turing Test mentioned previously for the first time ever. Five computer programs entered the 2014 Turing Test, and the winner, the Russian program “Eugene Goostman,” marks a milestone in the world of artificial intelligence. Eugene was able to fool 33 percent of the human judges into believing it was a 13-year-old boy. (The threshold for passing is 30 percent.) The consequences of this seeming victory are unknown at this time. A computer that can trick someone into thinking it is human is a tool that could be used for conducting cybercrime, or for combatting it. The creators of Eugene plan to keep making the machine smarter.

Ultimately, David Auerbach should refer to the stupidity of humanity rather than the stupidity of computers. After all, it is the human's decision to step down and join the computer's reality. I personally do not plan on dumbing myself down to a computerized version of myself. We are now entering a phase of human embarrassment in which people are eager to live their lives digitally and to define their lives by social media. We still have (relative) control over our computer dependency; to give up that control is our dumbness, not the computer's.


References:
http://www.cs.huji.ac.il/~arir/10-sarcasmAmazonICWSM10.pdf
http://www.businessinsider.com/r-if-you-want-to-fake-it-dont-do-it-around-this-computer-2014-21
http://www.washingtonpost.com/news/morning-mix/wp/2014/06/09/a-computer-just-passed-the-turing-test-in-landmark-trial/
http://www.nbcnews.com/tech/tech-news/turing-test-computer-program-convinces-judges-its-human-n125786