Or How We Might Live Without Books
by John Lobell
The following is excerpted from my book, “Visionary Creativity,” which is looking for a publisher.
What do Michelangelo Buonarroti and Mark Zuckerberg have in common? They are both Visionary Creatives. They accomplished not just mastery, not just innovation, not just creativity, but Visionary Creativity. The work of the Visionary Creative is embedded in its culture, and, in a circular process, that work is instrumental in the destruction and recreation of its world.
We can imagine the morning of September 8, 1504, when Michelangelo’s sculpture of David, on which he had worked in secret for three years, was drawn from his studio into Florence’s Piazza della Signoria. There must have been a shock and then a realization: “Yes, that’s it! That’s what I have been trying to imagine but did not until now have the imagery.” David, a symbol of Florentine independence, is, of course, an Old Testament figure, but the sculpture is in the style of ancient Greek sculpture, thus bringing the Biblical and Hellenic traditions together. David was simultaneously the embodiment of Renaissance humanism, with its focus on the human, and a stretching of that idea. In simplistic terms, we might say that there is spirit, the human, and nature. The Biblical traditions hold that spirit is the central and highest part of this trio; the Greek and Renaissance humanist traditions hold that the human is.
David was seventeen feet tall, truly monumental. Before the Renaissance, the most important component of one’s Self was one’s eternal soul that temporarily resided in an ephemeral body. By “Self,” we mean our notion of who we are in a deep sense. The humanists, including Michelangelo and his fellow artists, experienced something new, seeing the Self as body, mind, and soul. The magnificence of David’s anatomy celebrated the body, connecting the Florentines to the ancient Greeks through its similarity to Greek sculptures, while his piercing eyes are windows into a private mind at work, a psychology. Humanism, born of the books made possible by the new print technology, focused its understanding on both who we are as physical beings and on the private mental processes inside of our heads. Still, David was a stretch for the Florentines of the day. It was that stretch, a discontinuity, the difference between what the Florentines had been anticipating and what was presented to them, that helped destroy the medieval world in which the body was only a corrupt vessel for the soul, and crystallize the new humanist vision that some anachronistically still hold today. As we will see, we now live in a very different world.
The growth of Mark Zuckerberg’s online social network, Facebook, has been so rapid and so well documented that it has led to the Silicon Valley garage being challenged by the Harvard dorm room as an icon for an innovation incubator. On February 4, 2004, Zuckerberg, then a Harvard sophomore, pressed the enter key on his laptop computer and launched Thefacebook, later to become just Facebook. Facebook allows users to “link” to and communicate with “friends,” and to post profiles, personal updates, photographs, preferences, and other material for their friends to see. While several web sites already had many of those features, Facebook brought them together in a compelling way and quickly grew to become one of the world’s most valuable companies. Stories of Facebook typically focus on the personalities of its founders, its rapid growth, and its record-setting IPO (initial public offering), missing its significance, which is the facilitating of our migration from inside our skins out to the electronic cloud, thus destroying the individual psychological Self of Michelangelo’s humanist world and opening us up to a new and still unfolding world.
We are more than our bodies, minds, and souls. We are also our roles, relationships, friends, papers, photos, memories, etc. Our identities began migrating outside of our skins as soon as we started making art, and the pace of that migration increased with writing and then again with printing. But the pace greatly accelerated in the late nineteenth century as we began to weave an electric net around our planet, and exploded with the Internet as we deposited vast parts of ourselves—our records, images, memories—in networked server farms around the world, known as the cloud. Zuckerberg’s Facebook both adopted and extended our putting more and more of ourselves into the cloud, facilitating the sharing of this material, dissolving the private humanist vision created by the printed book and crystallized by Michelangelo, and creating a new vision. This destruction of the old and creation of the new is the role of the Visionary Creative, and the reason the Visionary Creative is both celebrated and feared. The vociferous objections to the Internet in general and Facebook in particular regarding privacy are actually reactions to the ongoing destruction of the private psychological Self that had been a function of the previous culture. Such changes are always threatening.
So what do Michelangelo and Zuckerberg have in common? Both experienced their cultures coursing through them, and both were motivated to make apparent to others what was obvious to them: that the world was no longer what it had been. The Visionary Creative is someone who manifests the spirit of the age in his work and at the same time propels that spirit forward into a continually unfolding future, destroying old worlds and building new ones. And now Visionary Creatives are building our twenty-first century, the most radically different period the human race has ever experienced.
We Live in Cultures
We store much of ourselves in our culture, particularly in our arts. Recall the end of the 1982 science fiction movie Blade Runner, directed by Ridley Scott, starring Harrison Ford, and based on Philip K. Dick’s novel, Do Androids Dream of Electric Sheep? Androids are created to work off-planet, are forbidden to come to earth, and are programmed to die in full health. Those who do come to earth are hunted by special police. The android Roy Batty, played by Rutger Hauer, as he is dying, says, “I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain. Time to die.” He releases a white dove that carries away his experiences as he dies. The androids have no cultural forms, no art or literature, in which to store their memories.
Again, Visionary Creatives swim in the culture of their day and manifest in their work the spirit of the age. The things they create—in art, design, science, technology, business—embody that spirit, and at the same time are a little off center for us, somehow not what we expected, presenting a discontinuity that stretches us, restructures our consciousness, pulling us into the future.
Our Emerging Twenty-First Century
What will be the new creativity? We cannot say, but we can look at the stage on which the new creativity will unfold.
First, we should say that in many ways we are still on the twentieth century stage with no fixed frames of reference. Mathematics still has no foundation, physics no framework of space or time, and art no fixed observer. And our mythic archetypes still reside within the unconscious of each of us. But some of the features of the twentieth century have deepened, and some new elements have been added. Let’s briefly look at a few of the features of our new century. (Note: the cultural features of the twentieth and twenty-first centuries are addressed in several chapters in the book.)
We began this book comparing Michelangelo’s David with Mark Zuckerberg’s Facebook, looking at how they represented two very different notions of who we are and how we fit into the world. As we have been discussing throughout this book, Visionary Creativity takes place in cultural context, and in a circular manner, also creates its culture anew.
What do people do on Facebook and other social media? They upload details of their lives: biographical profiles, minute-by-minute updates on what they are doing, photographs, information about relationships, and references to things they like—for their friends to see, respond to, and add their own links to.
Before we go further, if you are not familiar with all that Facebook has to offer, you might want to have a younger person show you its features. While he is at it, ask him to show you Twitter, which allows webs of contacts between people, and BitTorrent, a service that lets you download pirated movies. (If you are reading this book a decade or so after this writing, you might have an older person come over to explain to you what Facebook, Twitter, and BitTorrent were.) How does BitTorrent work? You might imagine that someone buys a DVD of a movie, obtains one distributed to the industry if it has not yet been released, or surreptitiously videos it in a theater, and then puts a digital file of it on a web site from which others can download it. You would be right about the first part, but not the second. One of the ways the Internet works is to break information into small “packets,” label the packets with instructions on how to reassemble them to recreate the original file, and transmit these packets through different channels depending on traffic, letting the destination computer put them back together. BitTorrent goes one step further. When you ask to download a movie, it finds copies of it on thousands of computers whose owners have joined BitTorrent, and sends you pieces from many of them for your computer to reassemble. Doing it that way makes the download much faster—the packets come to you in a torrent of bits. And at the same time you are downloading one movie, you might be contributing pieces of another movie already on your computer to someone else. (Of course the owners of movies, music, etc. are not happy about this piracy of their material.) Thus when thinking about Facebook, Twitter, BitTorrent, and other online media, we might think of the Internet as one huge cluster thing.
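For readers who think in code, the piece-swarming idea can be sketched in a few lines. This is a toy illustration only, with invented peer names and a made-up “movie”; the real protocol adds trackers, piece hashes, and sophisticated scheduling:

```python
# A toy sketch of BitTorrent-style swarming: split a file into pieces,
# fetch each piece from whichever peer happens to hold it, reassemble.
# (Illustrative only; not the actual BitTorrent protocol.)

def split_into_pieces(data: bytes, piece_size: int) -> list[bytes]:
    """Break a file into fixed-size pieces."""
    return [data[i:i + piece_size] for i in range(0, len(data), piece_size)]

def download_from_swarm(piece_count: int, peers: dict) -> bytes:
    """Fetch each piece from any peer that has it, then join them in order."""
    assembled = {}
    for index in range(piece_count):
        for peer_pieces in peers.values():
            if index in peer_pieces:
                assembled[index] = peer_pieces[index]
                break
    return b"".join(assembled[i] for i in range(piece_count))

movie = b"scenes of the movie, byte by byte"
pieces = split_into_pieces(movie, piece_size=8)

# Each peer holds only some pieces; together the swarm holds them all.
swarm = {
    "peer_a": {0: pieces[0], 2: pieces[2]},
    "peer_b": {1: pieces[1], 3: pieces[3]},
    "peer_c": {2: pieces[2], 4: pieces[4]},
}
assert download_from_swarm(len(pieces), swarm) == movie
```

The point of the sketch is the one made above: no single peer need hold the whole file, yet every downloader reconstructs it exactly.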
BitTorrent doesn’t just change how we download things, it completely changes what a “thing,” in this case a digital file, is. Likewise, social media don’t just change the way people interact, they change what social interaction is, and since in some ways we are social beings, they change what we are. Researchers are now observing in social networks patterns of human behavior that had previously not been visible. Thomas Malone, director of the Center for Collective Intelligence at the Massachusetts Institute of Technology, states, “This is a significant step forward. We have vastly more detailed and richer kinds of data available as well as predictive algorithms to use, and that makes possible a kind of prediction that would have never been possible.”
In 1945, Vannevar Bush, a major figure in modern technology and public policy, including being an organizer of the Manhattan Project, published an article proposing the memex, a system that would allow people to store and retrieve all of their information. Later, in the 1970s and 80s we saw the development of hyperlinking—the ability to jump from a place on a computer file to a place on another file, including Apple’s 1987 program, HyperCard. In 1989, Tim Berners-Lee proposed what we now know as the World Wide Web, and in 1993 Marc Andreessen created Mosaic, the ancestor of today’s Web browsers. The Internet had been launched in 1969 as a means of transmitting information without central hubs that might be vulnerable to disruption during a war, and since the late 1970s systems called bulletin boards were doing much of what the Web was to codify. What Berners-Lee did with his World Wide Web was to establish a set of standards to allow any computer to communicate with any other computer, allow formatted graphics to look pretty much the same on all computers, and allow parts of any document to hyperlink to any other document even if they are on different computers. What we call “the Internet” or “the web” today is a combination of the Internet and the World Wide Web.
In 1996, Larry Page and Sergey Brin began Google as a research project on Internet search at Stanford University in California. At the time most search engines ranked results by how many times the word you were searching for appeared on the page. If you were searching for shoes, for example, you would get pages that put the word shoes a thousand times at the bottom. Page and Brin developed an approach (sometimes referred to as their secret sauce, or secret algorithm) that ranks results based on relationships between websites, including but not limited to how many other pages link to the page under evaluation, and how many links are made to those pages. What began at Google as a new way to search the web became a mission “to organize the world’s information and make it universally accessible and useful,” a mission that could have ominous implications of the kind we discussed in the chapter on creativity and destruction.
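The ranking idea can be made concrete with a few lines of code. The graph below is invented, and this simplified PageRank-style iteration omits the many refinements of Google’s actual algorithm; it is a sketch of the principle, not the product:

```python
# A minimal sketch of link-based ranking in the spirit of PageRank:
# a page's rank comes from the ranks of the pages linking to it,
# not from how often a keyword appears on the page.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

# Invented three-page web: both "a" and "b" link to "hub",
# so "hub" earns the highest rank regardless of its keywords.
graph = {"hub": ["a"], "a": ["hub"], "b": ["hub"]}
ranks = pagerank(graph)
assert ranks["hub"] > ranks["a"] > ranks["b"]
```

Notice that the rank of a page is defined entirely by its relationships to other pages, which is exactly the shift the text describes.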
Notice something here. The previous method of Internet search was to look at characteristics of web pages to find those you might be looking for. Google’s approach sees the Internet as webs of interrelationships. What it presents to you is a function of those interrelationships. It’s all one huge cluster thing. And it is important to understand that Internet search is still in its infancy. Stephen Wolfram has created a search service called Wolfram Alpha that can compute relationships between entities being searched, so for example, one might ask, “What is the population of the United States as a percent of the population of France?” The search engine has to find the underlying information, and then do computations to come to an answer.
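The find-then-compute step can be sketched with a toy lookup table. The population figures below are rough illustrative numbers, not Wolfram Alpha’s actual data, and the “search engine” is just a dictionary:

```python
# A toy "computational search": first look up the underlying facts,
# then compute with them to produce an answer that appears on no page.
# (Figures are rough illustrative values, not live data.)

facts = {
    "population of the United States": 331_000_000,
    "population of France": 67_000_000,
}

def as_percent_of(numerator_query: str, denominator_query: str) -> float:
    """Answer 'what is X as a percent of Y' from stored facts."""
    return 100 * facts[numerator_query] / facts[denominator_query]

pct = as_percent_of("population of the United States",
                    "population of France")
assert 490 < pct < 500  # roughly 494 percent with these figures
```

Trivial as it is, the sketch shows the difference in kind: the answer is computed from relationships between facts, not retrieved from any single document.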
Today we use the Internet for everything from social networking to research to photo storage to shopping to banking to watching videos to resetting the thermostat at our weekend house, and forget how recent and revolutionary it is. Put simply, it reorganizes everything about information. We are already impatient that we cannot yet apply it to the material world, to link to our car keys, for example, though people are working on that as we become able to put miniature sensors into just about anything. But aside from its pervasive convenience, the Internet creates a world of relationships rather than a world of space and time.
There are branches of mathematics that work independently of dimension, such as Boolean algebra, and instead investigate relationships. Thus in Boolean algebra’s Venn diagrams, objects are distinct, overlapping, or one contained within another, independent of their shape or size. Dimension and duration are not factors; all is relationship. Much like our emerging networked world, and ourselves as that world changes us.
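The Venn-diagram point can be stated in code with sets, where only membership relations exist and shape or size never enters. The set contents below are invented for illustration:

```python
# Set relations capture the Venn-diagram picture: distinct, overlapping,
# or contained, with no notion of shape, size, or position.

florentines = {"Michelangelo", "Leonardo"}
sculptors = {"Michelangelo", "Bernini"}
artists = florentines | sculptors  # union of the two circles

assert florentines & sculptors == {"Michelangelo"}  # overlapping regions
assert florentines.isdisjoint({"Zuckerberg"})       # distinct regions
assert sculptors <= artists                         # one contains another
```

Everything the diagram can say is expressed here as pure relationship, which is the analogy the text draws to our networked world.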
Facebook, BitTorrent, Twitter, Wikipedia, etc. are parts of what we currently call “the cloud.” Cloud computing refers to computing and the storage of data not on your computer, but on servers your computer or mobile device connects to through the Internet. One advantage of this approach is that you are no longer tied to one device the way you were once tied to the location of your library or your desk or your computer. Once you place your material in the cloud, you can access it from any suitable device anywhere, so your relationship to “space” is entirely changed. (Just be sure to remember your password.) But there is more. If, for example, you use word processing software residing on your computer, you work in isolation. If you use word processing software based in the cloud, you can work in constant collaboration with various groups. This creates the potential to completely change the nature of the things we do as they become no longer private, but collaborative. There are obvious advantages and disadvantages to the cloud approach, but for better or worse, it will change everything.
Recall that at the beginning of this essay we wrote that we are also our roles, relationships, friends, papers, photos, memories, etc. The importance of these things is reflected in the Fourth Amendment in our Bill of Rights, which states that “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated…” The Fourth Amendment is interesting in that it indicates the importance assigned to our papers, but also in that, at the time it was written, it assumed that for the most part we would store them in our houses. Now we might store information that had been in our papers in the cloud. The cloud can also do our computing, so that anyone can have the computing power of Google or Amazon or Microsoft and rent it by the hour. And cloud computing means that what we have called “privacy,” the right to be secure in our “persons, houses, papers, and effects” that we demanded during the print era, will lose its meaning. Privacy dissolves.
You might initially think of Facebook as a tool for kids to coordinate their partying and keep tabs on who is dating whom. You would initially have been right. But that is just the beginning. You can put your entire life history on Facebook: relationships, pictures, news, etc., as well as everything you are doing at the moment—what you had for dinner, what book you are reading and what page you are on, what song you are listening to. All of this can constitute recommendations for your friends, but suppose a handful of your friends are listening to the same song at the same time. This can now be broadcast in real time as news to all of your friends. News becomes immediate and hyper local, and you, your friends, the material you link to, and your activities become the nodes of totally new kinds of interrelated networks.
You might object that the agglomeration of this information is not news, and for now you would be right: news has been what an editor filters and interprets and says is news. But could not news be the sum total of what is happening? In the near future, news may be understood very differently. While many older people are concerned about who has access to their “data,” fearing, for example, that an online bookseller might know that they purchased a book about “a” and a book about “b,” and might send them an email offering them a book about “a and b,” many younger people are spending inordinate hours every day uploading every detail of their lives into social media Web sites.
McLuhan said the world was becoming a global village, but not the kind of village we have been familiar with. This is where Facebook starts to scare some, mostly older, people. Others, mostly younger, can’t imagine what the problem is. As the online gadget review Web site, Gizmodo, said, “… your entire existence, Facebook-ified. It’s terrifyingly amazing.”
Imagine each of Facebook’s billion plus users as a point. Now imagine lines connecting each pair of points that are friends. Further, imagine lines connecting all of the photos, events, and other items on Facebook to which there are interdependent links. If you are imagining this on a two-dimensional surface, you are going to have a lot of crossing lines. Maybe you could place your points in three-dimensional space and then link them. But that is not how computer scientists think. They can place their matrix in a space of as many dimensions as they need. And it is a space without extension, either of distance or time, but rather a parametric space in which the coordinates of a given point are not on a pre-existing grid (the outdated Newtonian stage), but rather are defined by each other; as one value changes, the coordinates of others change at the same time, not unlike what we saw with CATIA earlier in this book. All is interrelationship. All is Indra’s net, as we become a part of one huge cluster thing. What kind of stage, what kind of world are we dealing with here?
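A relationship-only space can be sketched directly in code. The friend graph below (names invented) is stored purely as links, with no coordinates, distances, or durations anywhere in the representation:

```python
# A toy friend graph stored purely as relationships.
# There is no geometry here at all: no positions, no distances,
# only who is linked to whom. (Names are invented for illustration.)

friends = {
    "ada": {"ben", "cai"},
    "ben": {"ada"},
    "cai": {"ada", "dee"},
    "dee": {"cai"},
}

def mutual_friends(a: str, b: str) -> set:
    """People connected to both a and b — a fact about links, not space."""
    return friends[a] & friends[b]

# "ben" and "cai" share exactly one connection.
assert mutual_friends("ben", "cai") == {"ada"}
```

Everything a social network can compute about its users, including who to suggest as a friend, is derived from structures like this one, in which relationship is the only primitive.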
[The book looks at five areas in science today and asks what they say about the stage on which Visionary Creativity is taking place in our twenty-first century. Entanglement, in which subatomic particles can influence each other instantly over great distances. Parallel universes, which suggest that ours is only one of an infinity of universes. Evolution, which may be a consequence of networks of DNA as much as a consequence of natural selection. Genomics, in which simple rules can build complex things. And information, in which we see more and more of our once solid world dissolve into energy, and then information. Below is the section on evolution.]
Evolution is a fact; there are creatures on the Earth today that are descended from creatures of the past that were very different. How does evolution come about? Neo-Darwinism, which is Darwin’s theory of natural selection combined with Mendel’s genetics and updated with today’s understanding of random mutations and DNA, says that mutations continually occur. In subsequent generations those mutations that are beneficial are selected to be passed on to the next generation, while those that are detrimental die off. It is the slow accumulation of these mutations, usually in populations that become geographically separated, that leads to the emergence of new species. However, the sudden appearance of many species and the lack of evidence of substantial gradual change within many species, from their first appearance until their extinction, was always a problem. Darwin noticed it and said the thinness of the fossil record was the reason. A hundred and fifty years later the fossil record is not much better.
In fact, the fossil record shows that the emergence of new species does not take place gradually, but rapidly, at least in terms of geological time. In 1972, Niles Eldredge and the late Stephen Jay Gould put forward the theory of punctuated equilibrium, stating that most species remain stable for long periods of time, and then undergo rapid short bursts of evolutionary change. Eldredge and Gould otherwise still held to the neo-Darwinian view. While continual modification within species has been shown again and again, both in the fossil record and in experiment, the jump to new, very different species has presented more of a problem, and while punctuated equilibrium describes rapid change, it does not provide an explanation for it. Evolutionary biologists don’t like to talk about this, because it leaves them open to attack by creationists.
Mitochondria, the units in cells that generate energy, and chloroplasts, the units in plant cells that engage in photosynthesis, both contain their own DNA independent of the cell’s other DNA. How did they come about, and why do they have their own DNA? In 1967 the late biologist Lynn Margulis, then married to the astronomer and science popularizer, Carl Sagan, published a paper suggesting that both mitochondria and chloroplasts evolved from bacteria that merged with other cells. At first controversial, this idea is now widely accepted.
But Margulis, whose specialty was bacteria, eventually went much further, putting forward the theory of symbiogenesis. She observed that bacteria are very facile at exchanging DNA, even between different species, and that viruses reproduce by injecting their DNA into host cells. She contended that the wholesale movement of DNA from one species to another, even from one phylum or kingdom to another, often driven by bacteria and viruses, is the driving force in evolution. Symbiogenesis provides an explanation for the leaps in evolutionary change that are in fact observed in the fossil record but are not adequately explained even by punctuated equilibrium. The theory received support from the discovery of substantial amounts of bacterial and viral DNA in humans when the human genome was sequenced. As more and more species get sequenced, we find more and more of everybody’s DNA in just about everybody else. One huge cluster thing.
In an interview, Margulis stated:
“All visible organisms are products of symbiogenesis, without exception. The bacteria are the unit. The way I think about the whole world is that it’s like a pointillist painting. You get far away and it looks like Seurat’s famous painting of people in the park. Look closely: The points are living bodies—different distributions of bacteria…. Symbiogenesis recognizes that every visible life-form is a combination or community of bacteria.”
James Cameron’s 2009 science fiction movie Avatar presents all of life on the distant Moon Pandora as one continuous and conscious organism. The idea is widespread these days and is often associated with James Lovelock’s Gaia hypothesis, the notion that the Earth’s entire biosphere can be thought of as one self-regulating organism, but it might have also come from Lynn Margulis.
The symbiogenesis theory does not supplant neo-Darwinism, but supplements it. However, despite the evidence for it, it does not have a lot of support; it is too threatening, and it requires expertise in fields unfamiliar to the current generation of evolutionists. Evolutionary biologists have the luxury of ignoring our interactions with the DNA in microbes, since their work is ideological and has no immediate practical consequences. Not so with those studying cancer. Our understanding of cancer is now focused on DNA that turns on and off various genes, thus causing uncontrolled growth. Over ninety percent of the DNA in human beings is in the microbes floating around inside us, and we now realize that this microbial DNA interacts with our own. While we currently understand very little about these interactions and their consequences, they do appear to be important in the development of many cancers. We are approaching one of those paradigm shifts that requires those established in a field, in this case evolutionary biology, to retire before the shift can be completed. Margulis’ response to the resistance to her ideas? She states that Neo-Darwinism, excessively focused on competition, will ultimately be judged as a “minor twentieth-century religious sect within the sprawling religious persuasion of Anglo-Saxon Biology.”
While neo-Darwinism emphasizes competition between discrete organisms as the primary driving force of evolution, symbiogenesis sees all of life as a complex web of interpenetrating genomes, and it proposes cooperative, symbiotic, and parasitic relationships as being as important as competition in evolution. According to Margulis and her son Dorion Sagan, “Life did not take over the globe by combat, but by networking.” Recall that Yahoo’s search technology depended on characteristics of a Web site, while Google’s search technology depends on the relationship of a Web site to the rest of the Web. In like manner, neo-Darwinism is a nineteenth century mechanistic notion that sees discrete entities. Symbiogenesis is a twenty-first century notion that sees relationships. It will be interesting to see if we will have to wait for a substantial penetration of evolutionary biology by Asians, who tend to see the whole rather than the figure-ground seen by Westerners, before there is a paradigm shift.
Paradigms and Metaphors
Let’s ask, why are we looking at science in a book on creativity? Recall that we are saying that science is not an ever more accurate picture of an objective reality, but rather is culturally dependent. This notion was long held in European continental philosophy, as opposed to Anglo-American analytical philosophy, which generally holds that there is a knowable objective world out there. It is now being embraced not only in the philosophy of science, as we see in the broad acceptance of Thomas Kuhn’s “paradigms,” and in the work of post-modern academics who see scientific theories as socially constructed, but also among leading scientists. In their book, The Grand Design, Stephen Hawking and his coauthor, Leonard Mlodinow, use the term “model-dependent realism” for what we have been calling paradigms. They write:
“Strict realists often argue that the proof that scientific theories represent reality lies in their success. But different theories can successfully describe the same phenomenon through disparate conceptual frameworks. In fact, many scientific theories that had proven successful were later replaced by other, equally successful theories based on wholly new concepts of reality….
According to model-dependent realism it is pointless to ask whether a model is real, only whether it agrees with observations… One can use whichever model is more convenient in the situation under consideration.”
Kuhn uses the term paradigms. Hawking and Mlodinow use the term models. Let’s see if the term metaphor will also work. If paradigms are not descriptions of absolute reality, which we are beginning to suspect cannot be described, what are they? When you cannot describe something directly, what do you do? You use a metaphor. A successful metaphor captures the essence of something and communicates it to a part of the mind that works outside of rational thought. Many scientists find anything outside of rational thought disturbing; these same scientists often do not relate to art. Some people experience the world rationally, others experience it metaphorically. A few, including revolutionary scientists like Newton and Einstein, can experience it both ways.
It is easy to see how we might refer to Michelangelo’s David as metaphorical. It represents—or rather gives us a direct experience of—the humanist notion of what a person can be. Can we say the same about a scientific theory? At first it would seem that the answer should be no. Art transmits to us a realization on the part of the artist. Science, we are usually told, represents an objective understanding of nature. Yes, this approach holds, our scientific understandings are continually evolving, but they are guided by principles of objectivity and rationality, and they constantly come closer to correct pictures.
Recall Newton’s assumptions of uniform space and time and Einstein’s theories of a continually morphing space-time. And we have the notion of the English physicist and astronomer James Jeans that “the Universe begins to look more like a great thought than a great machine.” In the terms we are using in this book, we might call both of these metaphors—descriptions that capture and communicate to us an experience of something that we can never fully comprehend. It is the sense in which both art and science are metaphorical that links them.
Books and Facebook
At the opening of this essay we contrasted Zuckerberg with Michelangelo. Now let’s contrast Zuckerberg with Edward Gibbon, the English author of The History of the Decline and Fall of the Roman Empire. Starting in 1776, Gibbon took twelve years to complete his six-volume work covering the period from 180 CE to 1453 CE, and focusing on the behavior and decisions of the Romans that led to the decay and eventual fall of their empire. Gibbon’s study, using primary sources wherever possible, is the first history of the Roman Empire, is still referred to today, is a model of scholarship, and brings us the quip from the Duke of Gloucester, “What! Another of those damned fat, square, thick books! Always scribble, scribble, scribble, eh, Mr. Gibbon.” And all done without Wikipedia. Or a word processor. Or even a typewriter.
The columnist and novelist, Anna Quindlen, wrote in How Reading Changed My Life:
“In books I have traveled, not only to other worlds, but into my own. I learned who I was and who I wanted to be, what I might aspire to, and what I might dare to dream about my world and myself…. There was waking, and there was sleeping. And then there were books, a kind of parallel universe in which anything might happen and frequently did, a universe in which I might be a newcomer but was never really a stranger. My real, true world. My perfect island.”
How many young people, we might wonder, feel this way today about books?
A book—a real book, not a contrived book for people to buy as a Christmas gift—can take years to write. Books represent a way of knowing and existing: A person with a point of view is interested in something and wishes to understand it more deeply. From his own point of view, he researches it, thinks about it, and comes to conclusions. He then presents his findings in a book, a medium that communicates with other persons who invest the time to read it, to follow the presentation and argument, and reach or not reach the same conclusions from their own points of view. All of which is dependent on the existence of literate individual persons capable of knowledge, insights, emotions, and wisdom, with points of view. McLuhan and Fiore write:
Like easel painting, the printed book added much to the new cult of individualism. The private, fixed point of view became possible and literacy conferred the power of detachment, non-involvement.
As literacy is replaced by electronacy, “individual,” “person” and “point of view” as we have known them become replaced by something new. Those attached to literacy cannot see the new and therefore see only dissolution. Reviewing Russell Banks’ Lost Memory of Skin, a novel she describes as canonical of our time, the New York Times book critic Janet Maslin writes:
“It tells of a plugged-in, tuned-out Internet culture ‘lost in the misty zone between reality and imagery, no longer able to tell the difference.’ And it explores the terrible, dehumanizing consequences of choosing to live this way…. This book expresses the conviction that we live in perilous, creepy times. We toy recklessly with brand-new capacities for ruination. We bring the most human impulses to the least human means of expressing them, and we may not see the damage we do until it becomes irrevocable.”
In an article in the New York Times titled “Growing Up Digital, Wired for Distraction”, Matt Richtel writes:
“By all rights, Vishal, a bright 17-year-old, should already have finished the book, Kurt Vonnegut’s ‘Cat’s Cradle,’ his summer reading assignment. But he has managed 43 pages in two months.
He typically favors Facebook, YouTube and making digital videos.”
And Matt Richtel writes that Allison Miller provides Vishal with serious competition:
“Allison Miller, 14, sends and receives 27,000 texts in a month, her fingers clicking at a blistering pace as she carries on as many as seven text conversations at a time. She texts between classes, at the moment soccer practice ends, while being driven to and from school and, often, while studying.”
We could lament the activities of Vishal and Allison Miller, just as Socrates lamented what he saw as the destructive effects of writing on the youth of his day, who, he said, would lose the ability to memorize. Or we could celebrate their entry into new worlds of interactive discourse, and the new visual thinking their video making is fostering, a kind of thinking those from a literate past will never fully understand.
For Maslin these are creepy times. For Richtel, 27,000 texts in a month are just too many. But those who live in a world of “electronacy” cannot for the life of them see the problem. Today, some young children, when given a magazine, will run their fingers over the pictures and wonder why they don’t move the way they do on a tablet computer. Many young people today may not share Anna Quindlen’s experience of books, but they have visual experiences far richer than those of their literate forebears. Look at the Lord of the Rings movies. The point here is not the hobbits and wizards, the elves and dwarves, the orcs and battle mastodons. These can be imagined by the reader of the Lord of the Rings books. But the movies present us with new points of view from ever-moving cameras that swoop through fantastical landscapes among streaming armies and around embattled characters, presenting us with a space never envisioned by Newton or the artists of the Renaissance who worked in perspective. The verbal richness that literate generations swam through in print is now replaced by a visual richness never before experienced, a richness we do not yet even know how to describe.
In 1964, Marshall McLuhan published Understanding Media, looking at the effects of electronic media on consciousness and culture. The exuberant tone of the book played no small role in forming the character of the 1960s, it introduced the term “the global village,” and it prophetically described today’s Internet-connected world.
“After three thousand years of explosion, by means of fragmentary and mechanical technologies, the Western world is imploding. During the mechanical ages we had extended our bodies in space. Today, after more than a century of electronic technology, we have extended our central nervous system itself in a global embrace, abolishing both space and time as far as our planet is concerned… Rapidly we approach the final phase of the extensions of man—the technological simulation of consciousness, when the creative process of knowing will be collectively and corporately extended to the whole of human society, much as we have already extended our senses and our nerves by the various media.”
Although he wrote this a dozen years before the first home computers and a quarter of a century before the Web, McLuhan is describing the effects on consciousness and culture of the Internet and of social networking long before they had been realized. Now they are being realized, and as we mentioned at the beginning of this book, we are migrating from inside our skins out to the electronic cloud, destroying the literate, humanist world and opening a new one. McLuhan is describing nothing less than the end of the private Self. Chilling. And as we mentioned, the objections to Facebook, Google, and other Internet companies regarding privacy are actually reactions to the ongoing destruction of this private Self that had been a function of the previous culture built on the book. We are experiencing a change from a private individual who is a subject of the state, to a decentered networked person, electronically extended around and off of the globe, who slips the bonds of the state. The state doesn’t like it. Seventeen years before Zuckerberg was born, McLuhan and Fiore wrote:
The older, traditional ideas of private, isolated thoughts and actions—the patterns of mechanistic technologies—are very seriously threatened by new methods of instantaneous electric information retrieval… that one big gossip column that is unforgiving, unforgetful and from which there is no redemption, no erasure of earlier “mistakes.”
What will our new culture be, what will the new person be, what will the new state be, and what kinds of cultural forms will both embody our new culture and project us forward? We cannot know what our new culture will be any more than we can know what the next art movement, the next scientific paradigm, or the next technology will be, but we can describe the stage on which they will all unfold.
The twentieth century was one adrift, searching for new ways of orientation without fixed frames of reference. And our twenty-first century? A vastness of universes, all linked into entangled interrelationships, generated from simple genomic rules, residing in our own unconscious and at the same time generated by our consciousness, and dissolving before our eyes from matter to energy to information and finally into creativity.
We read in Joseph Campbell’s Flight of the Wild Gander about the movement of our culture from one defined, bounded, and secured by tradition, to one that is free:
“Within the time of our lives, it is highly improbable that any solid rock will be found to which Prometheus can again be durably shackled…. The creative researches and wonderful daring of our scientists today partake far more of the lion spirit of shamanism than of the piety of priest and peasant. They have shed all fear of the bounding serpent king.”