Revolutionary technologies are often placed in the service of national power and objectives. In this seminar on Hiroshima and the making of modern America, we study the creation and use of the first atomic bomb and the ensuing nuclear-arms race between the U.S. and the Soviet Union. We will concentrate on the period from 1940 to 1963 – and also examine contemporary issues arising from nuclear weapons and efforts to limit their spread to Iran, North Korea and other nations outside the “club” of accepted nuclear powers. Through diverse perspectives, students will gain an appreciation for the technological and scientific forces that drove the U.S. to global dominance in the years following World War II. Hiroshima profoundly altered American attitudes toward science and technology, war and international security, education and invention. The bomb also raised questions about faith in technological progress, forcing some Americans to re-evaluate their relationship to the complex systems on which their lives depended. The movement to limit the testing of nuclear weapons – and to control their spread to other nations – spawned the first genuinely global movement to control science and technology. At home, anxieties about nuclear weapons energized an American public more accustomed to trusting experts and government decrees, fostering a more activist, critically engaged civil society and contributing to the rise of feminism, environmentalism, and movements for “appropriate” and “responsible” innovation. These anxieties reached a crescendo during the Cuban Missile Crisis of October 1962, which brought the U.S. and the Soviet Union to the brink of nuclear war. At the same time, the Manhattan Project came to embody the view that, given enough resources, technocrats could achieve virtually any end. In short, Hiroshima reveals much about the making not only of modern America but also of the contemporary world.
A history of the broad paths toward enhanced consciousness, or improved thought. The class emphasizes the contemporary pursuit of cognitive enhancement (CE) and the construction of world brains or a global super-intelligence. Individual and collective efforts to discipline the workings of the mind emerged as early as the Ancient Greeks and evolved through the Enlightenment and the modern era. Over the past 50 years, two important paths, or trajectories, for enhancing human consciousness have come into clearer focus: one driven by bio-pharmaceutical technologies, the other by digital electronics, computing and the Internet. All three approaches – we can think of them as “mental discipline,” “the pill,” and “the processor” – coexist and co-evolve. The class will explore the differences and similarities among approaches to enhanced consciousness, past and future. Today the bio-pharma and digital-computing approaches are so robust that improvements to consciousness and cognitive performance are viewed as an everyday affair, and some consider the potential for perfect consciousness – even an immortal or everlasting individual or global mind – to be within reach. Through readings, written reflections and a team project, students will confront urgent questions about the history and future of consciousness, including: Why have humans pursued improved means of thinking for centuries? What is new and challenging about today’s approaches to cognitive enhancement? What are the dangers of new and improved consciousness, what are the potential benefits, and how can we learn from the past as well as imagine and design more appropriate paths to enhanced consciousness in the future? In what ways might commercial products, digital or pharmaceutical, that enhance the quality and quantity of thought widen inequality? Finally, how do we as individuals address these issues in our own lives?
Failures in the story of computing offer important lessons about how innovation works and why certain technically sound approaches failed to gain acceptance. For instance, Isaacson makes much of Gates’s decision to provide basic software for the first personal computer offered by IBM in 1981. Then a hegemonic giant in the computer world, IBM anointed Gates as king of PC software. That story is supported by Gates’s retrospective recollection that, at the time, he realized that one operating system, most likely the one chosen by IBM, would become the standard. Yet even if this account is correct, many observers fail to mention that IBM made the decision not to assist Gates in his ascent but because the company wanted to maintain the robust market and future for its more powerful mini- and mainframe computers, whose software at the time was powerful and proprietary. Rather than depict IBM’s decision as inevitable, arising from a set of constrained circumstances (the company needed a PC operating system and didn’t want to take the time to create one for what its engineers considered a toy computer that might quickly be superseded), Isaacson might have widened the historical frame by describing the outlook and mentality of IBM’s computer experts at the time. He would then have learned that IBM had sound engineering reasons for segregating the emerging PC from its then-hegemonic computing machines, which were essential to running large organizations. By missing the contingency at play in IBM’s world circa 1980, one presents as neat and logical an inflection point in the historical road that was actually messy, provisional and highly random.
The same kind of teleological thinking infects Isaacson’s repeated attempts to establish that the shape of the digital revolution – and especially the links between the PC as a tool for individual creativity and the rise of the Internet as a means for creative individuals to join forces in interesting ways – owes much to the California counterculture of the 1960s and 1970s. That proposition is largely correct, as John Markoff (author of What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry, 2005) and Fred Turner (author of From Counterculture to Cyberculture, 2006) showed a full decade before Isaacson turned up. But the strong links between the counterculture and PC pioneers don’t justify Isaacson’s further leap: describing the resulting digital revolution as chiefly motivated and propelled by the goal of shifting power over information from elites to the masses. The populist desire to spread the power of computing as widely as possible indeed reflected democratic impulses long at play in American society. But by ignoring the countervailing force unleashed by the digital revolution – that information became radically re-centralized by the Internet, enabling governments to more easily surveil and track dissenters and loyal citizens alike – Isaacson cannot explain how what he views as digital tools of democratic empowerment have become, in the hands of authoritarian agents such as the National Security Agency and the government of China, modalities of surveillance and repression. Once more, Isaacson distorts the history of computing in disturbing ways, focusing on a narrow set of business winners when the effects of the spread of computing on society are far more complicated.
While the specter of an all-knowing and all-powerful digital Panopticon overlooks the nuances and complexities of a networked world, the idea that corporate-mediated and -managed computing platforms are sure pathways to individual and collective liberation is no less dangerous an illusion.