In the Name of Innovation: How a Contemporary, Multi-tiered Reform Movement Came to Focus on Remaking the City

This is a talk I gave at a meeting of InnoAnon, a 12-Step-based group for innovation-speakers in recovery, which was held at the professional society 4S. Eventually, I hope to use this talk as the basis of an essay for the journal SSS, but I won't be able to get to that for some time.


Hi, my name is Lee, and I’m an innovation-speaker in recovery. My innovation-speak of choice is neo-Schumpeterian economics.

I'd like to thank Andrew for organizing this InnoAnon meeting and for asking me to share my story, and I'd like to thank Seyram and Holly for being so supportive and bravely agreeing to share their stories too. It feels really good to be here. (chokes up a bit)

Like the rest of you, I keep asking myself, why are we constantly surrounded with innovation-speak even though left-wing, socialist publications like the Wall Street Journal long ago realized that innovation had become an empty buzzword?


And like many of you, I believe that this jargon remains durable and prevalent primarily because it serves a multitude of interests: from tech executives looking to hype their products to investors, to university presidents hoping to boost federal research dollars; from anxious parents praying their children will have a place in the future economy, to political parties who have no vision for how to improve our current situation beyond the Hail Mary throw of economic growth. In this way, innovation is an ordering-concept that particular interests have used to try to transform and reform nearly every level of society.

This is not the first time that a single concept has held such power. As historians have shown, during the Progressive Era, individuals in the United States and other industrial nations sought to remake nearly every institution through the notion of efficiency: this is as true of such important high-level federal and industrial initiatives as Secretary of Commerce Herbert Hoover’s 1921 report, Waste in Industry, as it is of the efforts of home economists to make cleaning, cooking, and other forms of household maintenance and production more efficient, giving us things like Cheaper by the Dozen.

But since I have come to accept that my life had become unmanageable, have come to accept my higher power, and have begun working the steps with you all, I have been wondering: How should we think about and examine efforts to remake society in innovation's name?


As I've pondered this question and attended InnoAnon meetings like this one, my mind has returned again and again to a book I read when I was young and pretentious. In the fifth part of Volume 1 of the History of Sexuality, Michel Foucault lays out his notion of biopower. He writes that biopower, or a bio-politics of the population, and discipline, or an anatomo-politics of the human body, constituted “two poles of development linked together by a whole intermediary cluster of relations.” So, in this simple picture, at the top of the pole, we have statistical and other methods for viewing and steering the entire population, and at the bottom, we have the “micro-physics of power” centered on disciplining, forming, and training the individual human. In his late work, Foucault would add to this picture by examining ethics, or methods of self-discipline or self-transformation, which often took the form of resistance.

Foucault obviously focused on the place of sexuality in this scheme, but, as we know well, his framework needn't be limited to that case. After all, innovation is a wonderful example of the modernity that Foucault described as power “bent on generating forces, making them grow, and ordering them.” (136)

Now, what I want to argue, simply, is that if we view biopower, discipline, and ethics as operating on different levels and we examine this “whole intermediary cluster of relations,” what we find are further levels of power, action, transformation, and reform. For now, I will suggest that two additional levels of power or transformation are essential for thinking about innovation, both of them meso-level constructs: the level of the organization and the level of the region or locality. Our InnoAnon meeting today is focused on regional innovation policy. While telling the story of my fall and rise, I first want to lay out a general framework for thinking about the role that innovation-speak plays in our society. I'll be addressing these levels in no particular order before turning to the crucial role that regions and localities play in the ideology of innovation.

1. INNOPOWER
Increasing use of the word innovation is a distinctly post-WWII phenomenon. As scholars like Benoit Godin have shown, the intellectual roots of innovation-speak are many and varied, but particularly important for this story, I believe, is the rise of the economics of innovation. A turning point came in the late 1950s when economists like Robert Solow and Kenneth Arrow began studying national economic statistics, which had only become available since the Great Depression. Traditional factors of production, like labor and capital, simply could not explain economic growth, and these economists hypothesized that growth was caused by technological change, or innovation.

In the late 1970s, the term “innovation policy” began gaining purchase in Europe and the United States. Now, “innovation policy” put it in clearer and clearer terms that the point of government action (policy, rules, laws, tax codes, etc., etc.) should be to induce innovation. Numerous statistical indicators of innovative activity have been proposed during this period, from measures of R&D spending to patent counts, from hoodie sales to the number of squiggles put on whiteboards and Post-it notes. Increasingly, innovation became a panacea, a solution to all problems, including stagnant productivity, a declining middle class, multigenerational poverty, opioid addiction, and too many working-age white guys doing nothing but playing video games.

2. DISCIPLINE
If the macro scale of national measures and policy-making has focused on the puzzle of innovation, one solution has often been clear: making the nation’s and the globe’s children more innovative by getting scientific and technical skills into their bodies and minds; by teaching them “design thinking”; by making them makers and hackers and sending them to “coding camp.”

Such solutions have been deemed necessary by influential publications such as Paul Romer’s 1990 article “Endogenous Technological Change,” which has been cited nearly 25,000 times and which recommends public subsidization of the “accumulation of total human capital,” which basically meant science and engineering programs. The National Academy of Sciences published a report in 2007 titled Rising Above the Gathering Storm: Energizing and Employing America for a Brighter Economic Future. The board that wrote the report was dominated by corporate CEOs and university presidents. They argued (self-interestedly) that the United States needed to spend more on technical research and get more young people into science and engineering. Beginning in the early 2000s, leaders at the National Science Foundation began using the self-serving term “STEM education” and touting its importance for national well-being.


3. ETHICS
Not only do people seek to form their children in the image of great innovators, they seek to remake themselves in this image too. As Eva Illouz has taught us, we can think of self-help books as a technology of self-transformation. In this light, the business sections of bookstores and airport book stalls are full of little besides self-help books that will assist you in becoming more innovative, more disruptive, more agile, leaner, better. (Here slides were shown of business self-help books, including DeGraff’s Innovation You, O’Loghlin’s Innovation is a State of Mind, and ending with Whitney Johnson’s Disrupt Yourself: Putting the Power of Disruptive Innovation to Work.)


I thought this book might be about slamming a saucy Mexican platter after a long night getting hammered with friends, but it turns out to be a boring self-help book.

While it’s easy to make fun of this stuff, I think it’s important for us to confess how it calls to us. Once, when I was hitting bottom, I . . . . (pauses, chokes up) . . . I fantasized about becoming a thought leader. (Whispered: Oh dear God)

4. ORGANIZATIONS
The number of business self-help books offering to turn managers into innovators is outnumbered only by the number of books dedicated to making organizations more innovative. Here, the mass of publications is simply too mountainous to even really begin. A Google Books search for “innovative business” gets nearly 2 million hits. We can examine nearly everything taught in business schools during the last forty years as well as whole libraries of organizational theory books. Since the 1990s, many individuals have become enamored with Clayton Christensen’s theory of disruptive innovation. (By the way, you can hire Christensen via his consultancy, Innosight, if you’d like some really expensive advice.) Recently, however, social scientific analysis of Christensen’s own data has found that it does not support his conclusions about disruptive innovation, which is not nearly as prevalent or important as Christensen claimed. Put another way, “management science” is typically hooey, but since it too is really a self-help literature that feeds on managers’ anxieties, ambitions, and desires, its lack of veracity hardly seems to matter.

The desire to become innovative certainly grips business executives, but it also holds true for universities, as books like Philip Mirowski’s Science-Mart: Privatizing American Science, Elizabeth Popp Berman’s Creating the Market University: How Academic Science Became an Economic Engine, and Lawrence Busch’s Knowledge for Sale: The Neoliberal Takeover of Higher Education show us. In hindsight, many changes made to universities since the late 1970s—such as the Bayh-Dole Act of 1980, which allowed universities to patent inventions arising from federally funded research—were made to foster and generate innovation. Many of these changes have been harmful, which is what you’d expect from actions done for a false god.

5. REGIONS AND LOCALITIES
This brings us finally to the topic of our meeting today: how innovation-speakers have thought about and attempted to reform regions and localities in innovation’s name. It’s important to note that, at least since Alfred Marshall’s writings on “industrial districts,” economists have recognized that geography plays an important role in economic growth and industrial activity, and that industrial activity often develops in distinct localities, now called clusters, though why precisely clusters develop remains mysterious. In the US context, when pondering clusters we can think of the auto industry in Detroit, the steel industry in Pittsburgh, the machine tool industry in Youngstown, OH, etc., etc.

Of course, in the age of innovation-speak, one locality has gained more attention than any other. Interestingly, it was at about the time that the term “innovation policy” emerged that the first books about Silicon Valley began to be published. Since that time, localities throughout the United States and all over the globe have wondered how they can become the next Silicon Valley. When I delivered a talk called “The Innovation Fetish” at the Sydney Opera House, some Australian politicians were proposing new laws to develop what they called “Kangaroo Valley.”

A number of economic theories, such as “National Innovation Systems,” “Regional Innovation Systems,” and “Innovation Clusters,” have been developed to describe and explain regional economic development. Policies to increase innovative activity in localities take a number of forms, but perhaps two are the most important. First, we can think of Richard Florida’s popular theory of the "creative class." While the book’s title focuses on a class of people, its normative upshot is that cities should remake themselves to be appealing to hipsters, Apple-users, and other creative types. Florida’s policy recommendations have been picked up around the world, and again he has a consultancy or two that you can hire. Recently, Jacobin published an article suggesting that Richard Florida was sorry because it had all turned out to be bullshit. But my collaborator Andy Russell and I closely read this article and could find no evidence that Florida was in fact sorry.

The second important type of policy aimed at making regions and localities more innovative is basically economic incentives. “Technology-based economic development” is an older term for this kind of policy. Through a mixture of tax policy and subsidization, localities try to foster the creation of new businesses, say through the development of “business incubators.” Interestingly, if you search for the term “technology-based economic development,” what you find are slides from consulting companies, like TEDCO and Battelle. Selling these kinds of ideas to clueless local officials is big business. Then the officials scratch their heads after building an incubator and watching it sit empty, moldering.


In 2014, the Brookings Institution generated a new version of this basic technique, called the “innovation district,” and that idea has been picked up by localities all over the country, perhaps even around the globe. I do not make predictions, but if I were a betting man I would put my money against these things. If they do fail, I think the only good thing about them is that they will be a kind of conceptual art or physical comedy, like Charlie Chaplin imitating Clayton Christensen, monuments to human foolishness and frailty.

Conclusion
Beyond the intellectual reasons for studying innovation, I think critical innovation studies is also important for efforts to resist the kinds of changes made in innovation’s name. To the degree that innovation-speak leads us to focus overly on novelty and the new and to ignore other realities, like maintenance, it is an intellectual lie; to the degree that it allows us to pretend that innovation alone will solve important social problems, it’s a moral lie; and it’s also not clear that it’s effective even on its own terms. We’ve been doing innovation policy since the late 1970s. Are we currently experiencing a period of staggering economic growth? No. Just the opposite. And yet we are changing fundamental institutions in our culture, like universities, in ways that really damage them.

There are many people in our communities working on these topics today. The people speaking at this meeting; Benoit Godin and Matt Wisnioski writing about the history of innovation; Christina Dunbar-Hester working on maker communities, and many more. But I also believe that we can study this topic more systematically. To that end, Matt W. and I are forming something called the CRIMES Lab at Virginia Tech: CRIMES stands for Critical Innovation, Maintenance, and Engineering Studies.

All I can say about InnoAnon is thank God there’s a community like this one. Keep coming back; it works if you work it.

Stevens and the Gianforte Problem

This spring, Stevens Institute of Technology, the university where I work, will name a new building, sometimes referred to as the “Academic Gateway,” after Greg Gianforte, who has given the university $20 million.

Here are some of Mr. Gianforte’s proud accomplishments:

  • He is a billionaire or millionaire, depending on who you ask. He began his career at AT&T’s Bell Laboratories before going on to found a number of successful companies, including RightNow Technologies, a maker of customer relationship management software.
  • He is a politician. In 2016, he ran unsuccessfully to be the governor of Montana, and he is currently running in a special election for the US Congress.
  • Gianforte has donated at least hundreds of thousands of dollars, possibly more, to anti-LGBTQ organizations, including groups that fought against gay marriage and supported discredited and anti-scientific forms of “gay conversion therapy.” One of the organizations, the New Jersey Family Policy Council, claims, “Scientific research reveals that children are not simply born gay, and New Jersey victims deserve the right to receive treatment as the result of unwanted (same-sex attraction) brought on by pedophiles.” Another of the organizations has been identified as a hate group by the Southern Poverty Law Center. Gianforte also led an effort against a proposed non-discrimination ordinance that would protect LGBTQ individuals in Bozeman, Montana, claiming that such a law was unnecessary and that, if a law were passed, it should add a sexual orientation category of “ex-homosexuality.”
  • Gianforte has also made donations to Turning Point USA, which, among other feats of generosity, started Professor Watchlist, a website that creates lists of left-leaning professors. Gianforte is hostile to the values of academic freedom, or at least has never bothered to publicly distance himself from attacks on it.
  • Gianforte helped found the Glendive Dinosaur & Fossil Museum, including by donating the museum’s T. rex and acrocanthosaurus exhibits. The museum puts forward the Young Earth Creationist view that Earth is merely thousands of years old and that humans and dinosaurs lived together at the same time.

A scene from a future Stevens Creation Science Center? Perhaps. This is a model at the Glendive Dinosaur & Fossil Museum, in Glendive, MT, which Greg Gianforte funded. The museum puts forward the view that humans and dinosaurs lived at the same time. In this image, a pretty tough-looking prehistoric dude has apparently domesticated a triceratops and is riding it around like a total badass.  Another diorama depicts dinosaurs milling around Noah’s Ark. The dinosaurs didn’t make it onto the ark, which explains why they are no longer with us, the poor drowned bastards.

On April 3, 2017, a group of present and former Stevens students sent out a petition raising questions about the school’s decision to name the building after Gianforte. The petition brought to a boil concerns, worries, and anger over the university’s involvement with Gianforte that had been simmering for months—in some circles, years. (Stevens isn’t even the first university where such concerns have been raised: in 2014, students and faculty at Montana Tech protested after their administration chose Gianforte as their graduation commencement speaker.) In December 2016, the Faculty Senate—the highest-level faculty governing body at Stevens and of which I am a member—expressed apprehension about the naming of the building and questioned why the Stevens administration had not consulted the faculty about this decision. The Senate failed to take further action, however. (I take partial responsibility here.)

The students’ petition began a spirited debate at Stevens, particularly on faculty email lists. Some faculty, including myself, supported the students and argued that the situation required further discussion. Other faculty reacted with outright hostility. George Calhoun, a professor at Stevens School of Business, wrote, “I’ve been wondering when this sort of aggressive political correctness would reach Stevens—as it has now infected so many other colleges in this country.” He accused the students of proposing an ideological “litmus test,” of holding an “illiberal and intolerant perspective,” and of being like the controversial students at Yale, Berkeley, and Middlebury College.

On April 4, 2017, Stevens President Nariman Farvardin sent an email to specific groups addressing the brewing controversy. In some ways, Farvardin’s email was pretty good. It expressed unequivocal commitment to academic freedom, diversity and inclusion, and the pursuit of scientific truth. It also claimed that Gianforte did not mean to shape scientific research or discrimination policy at the school and that the school would not take money from any person or group who sought such influence. But in other ways, Farvardin’s email was quite weak and didn’t even begin to address the concerns or requests made in the petition or subsequent discussions. I will discuss these shortcomings in the rest of this post.

Before addressing these problems, however, we should acknowledge the more general context: Diversity isn’t Farvardin’s strong suit. In fact, he’s lousy at it. The Stevens student body includes a pitifully small share of African Americans (~2–3%), far below the national figure (12.3%), let alone those of cities in Stevens’ immediate neighborhood (New York–25.1%; Jersey City–28.32%; Newark–53.46%). The student body is 70% male, 66% white. The faculty has few women and even fewer minorities. Yet Farvardin rarely, if ever, makes diversity a primary point of his public appearances, preferring to ruminate once again on the university’s place in college rankings and, as frequently as possible, repeating the phrase “return on investment.”

Recently, when Farvardin introduced the diversity and racial equity expert Shaun Harper at a public event, he focused his remarks on the few times the university held other discussions about diversity instead of being an honest broker and stating frankly that the university has a problem. Occasional public statements, like the university’s “strategic plan,” spend a few nice words on the value of diversity, but the school has not meaningfully changed the way it recruits or supports students, faculty, and staff or created resource-intensive diversity-centered programs to demonstrate that it will match words with action. As a colleague recently put it, “You can fit all of the black women who work on this campus—administration, faculty, and staff—into the president’s quite small conference room.”

Moreover, it has become increasingly clear over the past few years that the Stevens family not only owned slaves but also was actively involved in the slave trade. There is every reason to believe that wealth and profit won from this exploitation were rolled into the founding of the university. When Shaun Harper gave his talk on campus, he noted that minority students at colleges wanted the names of slaveholders removed from buildings and organizations. This task would be tough for Stevens because it would have to change its very name. But other prominent universities—including nearby Rutgers—have begun major research projects examining the role of slavery in their histories and reflecting on what that history and current inequalities mean for their present. Faculty members, including me, have suggested such a research project at public gatherings, which members of Farvardin’s administration attended. But they have done nothing about it. Instead, the president, other campus leaders, and the university’s website continue to peddle a Disneyfied version of Stevens’ history, which focuses on the family paterfamilias, Col. John Stevens, his two sons, Robert and Edwin, and how they made nifty inventions together—true innovators all, no doubt. Never mind that more of the family’s wealth may have been won from real estate and exploiting enslaved black humans than ever came from technological entrepreneurship. Farvardin hasn’t even expressed curiosity about the slavery research, let alone taken action on it.

It’s this inability or unwillingness to move beyond empty rhetoric about diversity that raises the most serious questions about Farvardin’s email. In my view, there are four basic problems with it.

First, the president’s email was targeted. It was sent to select individuals and groups, apparently including those people who signed the petition. So far, the president has avoided making any broader statement of principles. This is the kind of thing I’m gesturing towards when I describe Farvardin’s habitual failure to act in the name of diversity and other sensitive issues: he demurs from taking a stand before the entire campus community, let alone the broader public. For instance, there are alumni who for a variety of reasons didn’t feel they could sign the Gianforte petition but who are quite concerned about this issue, and they have heard nothing from Farvardin. The president’s narrow communication to a few people instead of all can look like an attempt to cool discussion and prove it unnecessary rather than open it up. The president should take a stand, and while he’s at it, he should make a moral argument for why the first new major building at an institute of science and technology should be named after a man who attacks gay rights, who denies scientific truth, and who funds groups that try to shut down academic freedom. If the president’s argument is solely, “Because $$$$,” he will have made a kind of argument but a feeble one.

This brings us to the second problem with Farvardin’s email: it does not address any of the concerns raised about the symbolic and long-term risks of naming a building after someone like Gianforte. This is particularly true given that this new building has traditionally been referred to as the “Academic Gateway” and has been intended to be a new public face and welcoming point for the university.

No one, it seems, would doubt that there should be some limits on who universities should take money from and name buildings and centers after. If Vladimir Putin gave $500 million to Stevens and asked the school to create the “Putin Freedom of Speech Center,” community members would be seriously concerned. Stevens administration has chosen to publicly celebrate and name its new building after a man who has worked hard to oppress the rights of fellow human beings, or at least that is how a significant portion of the population may see it, including potential donors and potential future students, especially LGBTQ ones. The administration has chosen to enshrine the name of an individual whose values are directly antithetical to inclusion, academic freedom, and true scientific inquiry.

Here, I have encouraged colleagues to take the long view: fifty and sixty years ago, many people in this nation were advocating racial segregation and bans on interracial marriage. We now see the actions of such individuals—like Strom Thurmond—as morally reprehensible. There has been a major shift in thinking about gay rights, including marriage rights, over the past decade. The highest court in our land has asserted that marriage is a right for all. Young people accept gay marriage in a way that their parents and grandparents did not. We have every reason to believe that this trend will continue and grow stronger.

What will a Gianforte building look like in 25 or 50 years? What kind of Academic Gateway will it be? What will it symbolize? The Strom Thurmond Gateway Greeting Center? We welcome you.

Business professor George Calhoun asserts that holding controversial views is the only thing Gianforte is “guilty” of. But I think this statement assumes an extreme form of moral relativism that most people don’t actually buy into. By logical implication, the only sin of defenders of slavery, like John C. Calhoun and George Fitzhugh, or proponents of segregation and anti-miscegenation laws, like Thurmond, is that they had “controversial views.” Few outside white nationalist circles believe this.

Moreover, Mr. Gianforte is not merely a private citizen who holds certain views. He is a politician who has pushed vigorously for certain causes, including by marshaling his significant wealth. If Gianforte is elected to the US Congress, Stevens may very likely be in the awkward position of being an institute of science and technology with a building named after a Congressman who is actively attacking science and engineering funding and policy, who is working to roll back the rights of its students, and who is walling out the foreign, immigrant students on whom the school's business model completely depends.

The third problem with Farvardin’s email is that it does not address any positive steps the university can take to strengthen campus diversity in the context of taking money from someone like Gianforte. Here, more is needed than more hot air. Hotter air is not enough. In their petition, the students demanded that Stevens administration “reaffirm their commitment, in actions and not just words, to be an inclusive campus regardless of race, gender, or sexual orientation.” (emphasis added) Community members have put forward a number of ways the university could do this, including by working to hire more LGBTQ faculty and starting a new queer studies program. The option that has been mentioned most often is to open a new LGBTQ and diversity center and place it prominently, front-and-center in the new building. (Putting a statue of Charles Darwin in the building’s lobby would also be a nice touch.)

Finally, the fourth problem is really a kind of meta-issue: there are deep and troubling questions about how the administration reached the decision to name the building after Gianforte without, it seems, consulting anyone in the wider campus community. The students raise this issue in their petition when they request “the administration publicly explain their reasoning behind accepting the donation and honoring Greg Gianforte in light of what he stands for.” (emphasis added) The Stevens Faculty Senate also raised concerns about the decision-making process in its December 15, 2016 meeting, which took place two days after Farvardin announced the building’s name. As meeting minutes note, “The Senate expresses deep concerns about naming the new building the Gianforte Academic Center without consulting with the Senate first (shared governance). The Senate is afraid that this [naming decision] may deter future donors or students from attending Stevens. The Senate is also concerned that this may negatively impact the academic freedom and the work environment of our faculty.”

The Senate further noted that the Stevens administration consulted the Senate when choosing commencement speakers and awarding honorary degrees but not when naming this building. The way this decision was handled reflects a wider pattern with Farvardin, who has to be reminded again and again to include faculty in committees and decision-making processes. This closed-door decision process is a good example of the top-down mode of corporate governance that Farvardin prefers, a mode that abhors democratic participation and honest intellectual engagement, a mode that sees no need for light because it fears light.

This autocratic style and lack of open discussion evokes darker days. Stevens’ last president, Hal Raveche, stepped down and left the university under a shadow of financial scandal and corruption. Farvardin’s entrance supposedly ushered in a new age of modernization and openness. The decision-making process for naming the Gianforte building clearly violates the spirit of transparency that was meant to mark the Farvardin era.

I am not pretending that this issue is simple or even clear. There are reasonable people with coherent arguments on every side of it. At the extreme ends, some individuals believe that Stevens should return Gianforte’s money, while others think that the naming is a non-issue not even worth talking about. They point, for instance, to famous cultural institutions that have taken money from and named themselves after the controversial Koch Brothers seemingly without problem. In middle positions, some believe that the school should keep the money and the naming but change the process by which such naming occurs. Others agree with this position but add the argument that the school must now take the kinds of proactive steps listed above (new LGBTQ center, new faculty, new programs, etc.). As one of my colleagues put it, “Every dollar this guy pours into putting brick and mortar on college campuses is a dollar he can’t spend trying to oppress other human beings.” Still others argue that Stevens accepted Gianforte’s money without strings attached, that it is under no contractual agreement to name the building after him, and that Farvardin has decided to do so purely by fiat.

All of these positions and more are worth considering and listening to. What is wrong, however, is not having the discussion in the first place. What is wrong is avoiding debate. As Kyle Gonzalez, one of the authors of the petition, wrote in a later email to faculty, “What concerns me most is the silence and absence of information surrounding all of this.” Likely our leaders have avoided such conversations because they do not want to offend their patrons, hoping instead that no one would notice—or at least dare to mention—the nature of the deal going down. If this is right, Farvardin and his people have too deeply bowed before external powers. They should get off their knees.

The students who created the petition would like to see President Farvardin and his administration publicly explain their reasoning for naming the building and reaffirm their commitment to diversity, academic freedom, and scientific truth. They would like the school to take proactive measures to bolster its diversity activities, in actions and not just words. They would also like the school to hold a public forum where people can raise questions, express opinions, and engage in real discussion. These do not seem like unreasonable demands.

When the dust settles from this moment, some serious questions and reasons for reflection will remain:

First, the Gianforte controversy has brought to light that the Stevens campus has a pervasive culture of fear—especially a fear of reprisal and retaliation. Teaching professors and tenure-track assistant professors have asked colleagues whether they could sign the petition only to be told that doing so would threaten their livelihoods. Other untenured professors have declared that they would not take a public position on the matter because they are too afraid and vulnerable. Obviously, taking money from a man who funds attacks on college campuses and professor watch lists does not help this situation. If Stevens would like to go from being a polytechnic to a full-fledged, robust university, it will have to start acting like one, including by fostering the right to dissent, to raise tough criticisms, and to engage the administration in vigorous debate.

Second, the way this situation has played out raises real questions about the university's leadership. Universities around the country are involved in intense debates about the value and meaning of diversity and about what role donor money should play in shaping their campuses. Characters like George Calhoun lump all of these discussions under the term "political correctness," but that is only because they lack subtler concepts in their intellectual toolsheds. There's more going on than any simple label can cover. Our culture is involved in a deep conversation about what it can and should be for all of its members. Stevens administrators have squandered an opportunity to join that conversation meaningfully. In this larger context, the clumsy way Farvardin and other leaders have handled this situation—the fact that they have failed to actually lead—truly beggars belief. Sadly, in this case, the administrators of an institution that has trademarked the name "The Innovation University" are well behind the times.

CFP for SHOT 2017: Thinking with Ann Johnson

The historian, philosopher, and sociologist of technology, Ann Johnson, died far too young on December 11, 2016. Ann was a remarkable person. She was smart and made fundamental contributions to our fields. She was also good. She put a great deal of time and energy into helping others—both her students and her peers—and she cared fundamentally about justice, particularly around gender and LGBTQ issues. Those who knew Ann miss her greatly.

Arrangements are currently being made to hold a memorial gathering for Ann at the 2017 meeting of the Society for the History of Technology in Philadelphia (October 26-29, 2017). In addition, a group of us will be putting together one or more panels—depending on interest—celebrating Ann’s thought and influence and articulating how her work opens doors for future inquiry. We welcome thoughts and paper proposals. Currently, a number of people will present papers on how Ann’s work relates to a variety of topics, including Allison Marsh on disability studies, David Brock on the notion of “knowledge communities,” Adelheid Voskuhl on Ann’s interest in philosophy, and Patrick McCray on emerging technologies. I will probably give a paper on how Ann gave us theoretical tools for moving beyond the cul-de-sac of “social construction.”

If you have thoughts or would like to propose a paper, you can write to any of the people above or drop a note to me at leevinsel@gmail.com.

Peoples and Things: An Introduction to Technology Studies Syllabus

Today is my birthday, and I spent it having a lot of fun (truly) jotting down a syllabus that has existed in my head for nearly two years. I call the class "Peoples and Things: An Introduction to Technology Studies." In many ways, I created this course out of frustration: many disciplines and academic fields that I have been tracking since graduate school—including anthropology, economics, history, political science, psychology, sociology, management and organizational studies, and Science and Technology Studies (STS)—have made great strides in the study of technology over the last fifty years. And, yet, there is no existing synthesis of this work. Primers and textbooks that I really like, including Sergio Sismondo's An Introduction to Science and Technology Studies, have little or nothing to say about technology but, instead, focus mostly on science. Moreover, these fields and disciplines largely ignore each other. (But that is the pleasure and joy of such boundaries, is it not?) Finally, in at least one vision of STS, it was meant to be an interdisciplinary meeting place for researchers focused on S&T, but that vision has largely ended in failure. STS scholars generally ignore economics, let alone psychology or quantitative political science, and researchers from these other fields are largely absent from STS gatherings. Much STS work on technology focuses on how this or that thing was "constructed" (really? still?); much of it centers on "emerging technologies" and the future; and, for all of these reasons, a great deal of it is superficial. Sad.

At some point, I realized that, if I wanted a synthesis of Technology Studies, I would have to build it myself. Thus, the plan for this course was hatched. My hope is that one day this work will be encapsulated in a book. Indeed, I believed that Peoples and Things would be my second book until another project called The Maintainers took over my life. So, for now, it is just a class, with the hope that the notes I take and lectures I write will one day lead to something grander.

The course features 13 substantive weeks (with one week for midterm exams). The planned book has more chapters than there are weeks in the course, so for now I will swap out subjects each time I teach the class. Two important topics that I have left out this time around are a) the relationship between technology and war and b) the environment and "envirotech." Perceptive readers will realize that I am building outward from the history of technology, which is a necessary function of my training. My hope is that, eventually, each week in the course will take into account what a set list of disciplines and fields have said about that topic. There are still things about the class that make me unhappy. Like, it is too dominated by dudes, especially white dudes. In my defense, I would say that it is not nearly as white-guy-centric as other intro to STS syllabi I've examined online, but it is still something I hope to change in the future.

If anyone has thoughts or alternate readings, I would love to hear about them.

Here's a PDF of the syllabus, which includes "The Zombie Scale of Classroom Participation," but for ease of use, I have copied the schedule with descriptions and weekly readings below.

Course schedule

Week 1—Technology: What’s to Explain? Why Explain It?: Technology is important, right? Well, maybe. Yet, the word “technology” was not widely used until the 1930s, and the study of technology is relatively young. In this first week, we will explore definitions of technology and the history of technology studies, with special focus on what we hope to explain about technology in the first place. In the first lecture, I will lay out what this course is and how it will work. In the second lecture, I will outline how individual academic disciplines started thinking about technology. Thinkers covered will include the proto-archaeologist Christian Jurgensen Thomsen, the anthropologist Lewis Henry Morgan, the economists Karl Marx and Joseph Schumpeter, the psychologist Hugo Munsterberg, the sociologist William Ogburn, and the historians Lewis Mumford and Siegfried Giedion.

Eric Schatzberg. "Technik comes to America: Changing meanings of technology before 1930." Technology and Culture 47, no. 3 (2006): 486-512.

Nightingale, Paul, What is Technology? Six Definitions and Two Pathologies (October 10, 2014). SWPS 2014-19. Available at SSRN: http://ssrn.com/abstract=2743113 or http://dx.doi.org/10.2139/ssrn.2743113


Week 2—Affordances and Social Networks: In this course, I will argue that we can get a lot of mileage out of focusing on just two basic ideas, affordances and social networks. In the first lecture, I will outline the psychologist James J. Gibson’s notion of “affordances,” which are possible courses of action that creatures perceive in their ecological environments. Among other things, the beauty of the affordance idea is that it will allow us to dodge the problem of defining technology that we encountered in the first week. The notion also opens up a slew of empirical approaches, from asking people questions to observing their behavior. In the second lecture, I will connect this idea to the world of social networks. I will argue that people’s relationships to affordances are highly dependent on which social networks they belong to. To explore this idea, we will examine research on how animals learn to use tools, including how Israeli roof rats learn to open pinecones and how chimpanzees take up “termite fishing.”

William W. Gaver, "Technology affordances." In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 79-84. ACM, 1991.

William H. Warren, "Perceiving affordances: Visual guidance of stair climbing." Journal of Experimental Psychology: Human Perception and Performance 10, no. 5 (1984): 683-703.

Bennet G. Galef, Jr., “Social Learning by Rodents” in Rodent Societies: An Ecological & Evolutionary Perspective, eds. Paul W. Sherman and Jerry Wolff, 207-215.

Charles Kadushin, Understanding Social Networks: Theories, Concepts, Findings, pp. 3-26.


Week 3—Hierarchy and Segregation: Technologies and other material realities are tightly inter-coupled with all kinds of social inequalities. In lecture 1, I will examine an argument that goes back to Jean-Jacques Rousseau, Lewis Henry Morgan, and Friedrich Engels, which holds that social inequality arose from one of the most important technological “revolutions” in human history, namely the birth of agriculture. We will learn that—no surprises—contemporary thinkers say, “Hey, it’s more complicated than that.” But they also still believe that inequalities and social hierarchies have a lot to do with how wealth and power are distributed. We will trace this history forward to contemporary anxieties about economic inequality. Inequality is often expressed through segregation, and Lecture 2 will examine segregation both in what kind of work we do and where we live. The habits, skills, and relationships to affordances that end up in our bodies and minds depend in large part on who we are and where our networks fall in social hierarchies. Topics will include the “gender division of labor,” the history of racialized work extending back to slavery, and the long, long, long history of segregated housing.

Friedrich Engels, The Origin of the Family, Private Property, and the State, excerpts

Yu Tao and Sandra L. Hanson, “Engineering the Future: African Americans in Doctoral Engineering Programs”

Ruth Cowan, More Work for Mother, Introduction

Carl H. Nightingale, Segregation: A Global History of Divided Cities, Chapter 1, “Seventy Centuries of City-Splitting”


Week 4—Diffusion, Adoption, Consumption, Use: Authors of books on technology often begin with the topic of invention because, they reason, invention marks the beginning of technologies. But that approach is kind of crazy because, in fact, all of our lives begin in medias res, or in the middle of things. We are born into a world full of objects that we then learn to use. This week will build on the thoughts from Weeks 2 & 3 to explore, in lecture 1, the study of use and, in lecture 2, the diffusion and adoption of things, which at both the organizational and individual levels has loads and loads to do with social networks.

Edgerton, David. "From innovation to use: Ten Eclectic Theses on the Historiography of Technology." History and Technology, an International Journal 16, no. 2 (1999): 111-136.

Charles Kadushin, Understanding Social Networks: Theories, Concepts, Findings, pp. 135-139.

Thorstein Veblen, The Theory of the Leisure Class, a two-page excerpt.

Mark Thomas Kennedy and Peer Christian Fiss. "Institutionalization, framing, and diffusion: The logic of TQM adoption and implementation decisions among US hospitals." Academy of Management Journal 52, no. 5 (2009): 897-918.


Week 5—Invention: Depending on how we define the term “invention,” it is either quite common or quite rare. In this class, we will think of invention as the introduction of new affordances. Lecture 1 will consider the biological and psychological underpinnings of problem-solving and creativity. We will begin by considering how ethologists have studied animal behavior around problem-solving and will spend a lot of time talking about crows, octopi, and non-human primates before turning to human primates and their penchant for self-aggrandizement. Lecture 2 will examine how institutions and other social factors have given rise to a culture and cult of invention since, say, 1700.

Carlson, W. Bernard, and Michael E. Gorman. "A cognitive framework to understand technological creativity: Bell, Edison, and the telephone." Inventive minds: Creativity in technology (1992): 48-79.

Joel Mokyr, The Enlightened Economy: An Economic History of Britain, 1700-1850 (2009), Ch. 5, “Enlightenment and the Industrial Revolution.”


Week 6—Organizations: Modern technologies go hand-in-hand with bureaucratic organizations, which both produce technologies and use them. Lecture 1 explores a theory first put forward by the business historian Alfred Chandler, who argued that the modern, M-form corporation arose around certain large and capital-intensive technologies. This thesis has not fared well with subsequent thinkers, who have asserted, for instance, that corporations had just as much to do with extending social control or, alternately, that the M-form was just an intellectual fad that had little to do with economic or technological reality. At the same time, Chandler’s explanation works pretty well in some cases, which have to do with stable demand structures. Lecture 2 offers brief exegeses on two topics: first, I’ll look at the role that communications and organizational technologies play in bureaucracies, including a history that takes us from the filing cabinet to the kinds of complex logistics systems used at Wal-Mart and Amazon. Second, I will hammer home the point that technologies play a role in all “organizations,” including social movements, and not just big ones, like companies. This includes the use of communications technologies in social uprisings, such as the use of printing presses in the American Revolution and social media in the so-called Twitter Revolutions. I will describe how activists used mimeograph machines, personal automobiles, and buildings called “churches” to undertake the Montgomery bus boycott. 

Alfred Chandler, The Visible Hand, Introduction

Stalk, George, Philip Evans, and Lawrence E. Shulman. "Competing on capabilities: the new rules of corporate strategy." Harvard business review 70, no. 2 (1991): 57-69.

Week 7—Midterm Week

Week 8—Systems, Infrastructure, Maintenance: A lot of early thinking focused on individual instances of technology, or what are sometimes called “artifacts.” Later theorists argued that this approach isn’t helpful for thinking about all of the interconnected technologies around us, or what we call “systems.” In the first lecture, I will outline theories about systems and infrastructure, and I will look at examples, like electrical power systems and the complex networks of satellites and computers that we use to study global climate change. The second lecture will focus on recent work on maintenance and repair.

Hughes, Thomas P. "The evolution of large technological systems." The social construction of technological systems: New directions in the sociology and history of technology (1987): 51-82.

Lara Houston, “Unsettled Repair Tools: The ‘Death’ of the J.A.F. Box”

Russell and Vinsel, “Hail the Maintainers”


Week 9—Industries, Professions, Standards: The fates and life-cycles of technologies are deeply interwoven with the rise and fall of social structures that we call industries. In the first lecture, I will trace developments in various strands of thought, including economic history and Neo-Schumpeterian economics. These thinkers argue that industries usually go through certain dependable life-cycles, the shape of which we will explore. Lecture 2 will examine professional groups, especially engineering societies, and standardization organizations. We will use the historian Ann Johnson’s notion of “knowledge communities” to think through how engineering knowledge grows (and doesn’t grow) and how these groups manage to standardize nearly every conceivable thing around us, except for what they can’t.

Steven Klepper, Experimental Capitalism: The Nanoeconomics of American High-Tech Industries (2016), Ch. 2, “Once Upon a Time”

Robert Gordon, The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War (2016), Preface and Introduction, “The Ascent and Descent of Growth.”  


Week 10—Politics, Policy, Regulation: Many contemporary academics will tell you, “Hey, man, everything is political!” and they will argue that “everything” includes everything we have covered in this class so far. Such potential quibbling is, of course, simply a matter of definition. In this week, we will cover capital-P, or formal, politics, including things like voting, political parties, and the workings of the various branches and levels of government. Government, it turns out, has always had an enormous influence on the development and use of technologies (libertarians be damned).

David M. Hart, Forged Consensus: Science, Technology, and Economic Policy in the United States, 1921–1953, Ch. 1, “The Malleability of American Liberalism and the Making of Public Policy”

Vinsel, Lee Jared. "Designing to the test: performance standards and technological change in the US automobile after 1966." Technology and Culture 56, no. 4 (2015): 868-894 . . . or alternately, Vinsel, “Focus: A Theory of Regulation and Technological Change,” if I have gotten around to it yet.


Week 11—Accidents, Disasters, and Terrorism, Oh My: Accidents, large-scale technological (and “natural”) disasters, and terrorism seem to be part and parcel of living in modern societies with enormous, interconnected, interdependent systems. A literature, which goes back at least to the 1980s, argues that such events show us a great deal about technology and human life with it. Yet, at least one of our authors will argue that, if disasters have anything to “teach,” we do not “learn.” Lecture 1 will focus on disasters and accidents, with special attention paid to so-called “natural disasters” and car crashes. Lecture 2 will examine terrorism, including a brief history of the car bomb. (Self-driving cars will make great bomb delivery devices, won’t they?)

Knowles, Scott Gabriel. "Learning from Disaster?: The History of Technology and the Future of Disaster Research." Technology and Culture 55, no. 4 (2014): 773-784.

Fortun, Kim. "Ethnography in late industrialism." Cultural Anthropology 27, no. 3 (2012): 446-464.

Skim: Richard Little, “Managing the Risk of Cascading Failure in Complex Urban Infrastructures”

Skim: Moghadam, Assaf. "How al Qaeda innovates." Security studies 22, no. 3 (2013): 466-497.


Week 12—Culture: Culture is a famously complex concept, which includes social phenomena like beliefs, rituals, ideas, practices, and ways or forms of life. Just like the academics who will remind us “Everything is political!” any self-respecting student of culture is going to crash our party to inform us—quite condescendingly, no doubt—that we have been talking about nothing but culture all semester long. True, true. But this week will focus on the so-called “cultural history of technology,” which has been one of the fastest growing and most intellectually exciting areas of technology history over the past two decades. In practice, cultural history has focused on examining how people think about technology, rather than on how technology changes or is used. In Lecture 1, I will outline the theoretical perspectives undergirding cultural history, and I will give examples of the many fascinating things people have been exploring using this family of approaches. I will also argue that cultural history has real limits, that we are already bumping into them, and that we need to place the tools of cultural history in the context of older methods—like building economic statistics and counting dead people—which scholars have found very, very, very boring for decades now. Lecture 2 will present a cultural historical case study of how policy-makers in the USA have thought about technology in the Post-WWII period, ranging from the “linear model” to the current scene of “innovation,” “STEM Education,” and whatnot.

Voskuhl, Adelheid. "Motions and Passions: Music-Playing Women Automata and the Culture of Affect in Late Eighteenth-Century Germany." (2007).

Patrick McCray, “California Dreamin’: Visioneering the Technological Future”


Week 13—Senses, Space, Time; Thinking, Things, and Thinking Things: In this week, we will consider how technologies connect with thinking and sensing, including our perceptions of space and time. In the first lecture, we will begin with Ernst Kapp’s arguments from the 19th century that technologies are “organ projections,” or extensions of the human body, especially of the body’s senses. We will furnish lots of examples, like microscopes and telescopes and what have you. We will learn that our ideas of time are deeply intertwined with technologies of time-keeping. Finally, we will examine long-standing arguments that technologies erase space and compress time. It turns out that such arguments are hard to substantiate and separate from an enemy of thought known as “nostalgia.” We will look at scholars who have attempted to study the issue empirically. In the second lecture, we will look at how tools interconnect with and aid human thinking, including ideas that our tools are part of an “extended mind.” This will lead us into a consideration of tools, like jeton coins and abacuses, which eventually feeds into the entire history of computing.

Frumer, Yulia. "Translating Time: Habits of Western-Style Timekeeping in Late Edo Japan." Technology and Culture 55, no. 4 (2014): 785-820.

Wajcman, Judy. "Life in the fast lane? Towards a sociology of technology and time." The British journal of sociology 59, no. 1 (2008): 59-77.


Week 14—Media: Why has the study of media traditionally been the scene of terribly weak thinking? That question is hard to answer, but the observation remains true nonetheless. In the first lecture, I will beat up on two German guys named Theodor Adorno and Max Horkheimer. I will argue that most of their confusions stem from having no understanding of how media industries actually operate (as well as no defensible psychological theory of how humans work). We will then discover that, hey, we are in luck because other thinkers have been doing wonderful research on media industries for decades. I will focus especially on the writings of Joseph Turow, who has given us great studies of the rise of target marketing and how target marketing eventually morphed into the online micro-targeting of today. In the second lecture, we will take aim at Marshall McLuhan’s motto, “The Medium is the Message.” First, we will ask, “WTH?” We will find that McLuhan was always wrong. We will explore the history of people doing actual empirical research on media use, and we will discover that human beings nearly always choose media that reinforces their pre-existing worldviews.

Paul Felix Lazarsfeld, Reading TBD with the help of Eric Hounshell.

Explore the homepage of Eszter Hargittai’s Web Use Project and read one article posted there: http://webuse.org/

Charles Kadushin, Understanding Social Networks: Theories, Concepts, Findings, pp. 139-148

 

The Maintainers: A Conference, April 8, 2016, Stevens Institute of Technology

Many groups and individuals today celebrate "innovation." The notion has influenced not only how we think about the present but also how we interpret the past. It has become a concept of historical analysis in both academic histories and popular ones. A recent example is Walter Isaacson's book, The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution. When this book was released in 2014, the historian of technology Andrew Russell put forward the idea of a counter-volume, titled The Maintainers: How a Group of Bureaucrats, Standards Engineers, and Introverts Made Technologies That Kind of Work Most of the Time. Since then, various scholars in science and technology studies have entered an ongoing conversation about developing a historical research program around the study of maintenance.

There are many reasons to turn to the history of maintenance at this time, and many questions that our workshop will engage. We are not claiming that the study of maintenance is new, especially since we are inspired by the formative work of historians, anthropologists, sociologists, labor activists, and many other communities. Rather, we are arguing that this is a propitious moment to turn to this theme.

In this light, we invite proposals for a conference to be held at Stevens Institute of Technology in Hoboken, New Jersey, on Friday, April 8, 2016.  Proposals might engage some of the following questions:

- What is at stake if we move scholarship away from innovation and toward maintenance?

- How do matters of innovation and maintenance in digital systems differ from earlier technological systems, such as those that provide water, electric power, rail and automobile travel, and sanitation?

- How does labor focused on novelty and innovation differ from labor focused on maintenance and conservation?

- How should studies of maintenance engage scholarship on race, gender, ethnicity, social justice, and economic inequality?

- What theories, methods, and sources can we use to study maintenance, infrastructure, and myriad forms of technological labor?

- What should policymakers do to respond to scholarship and activism around maintenance and infrastructure, such as the report cards issued by the American Society of Civil Engineers?

Instead of developing traditional conference papers, participants will be asked to write brief essays (~1,000-3,000 words), which will be due before the conference and will later be posted on a conference website for both scholarly and educational use. (Our model is partly based on the Histories of the Future conference/website: histscifi.com.) Essays that include images, sound, video, and other mixed media are welcome and encouraged (but not necessary).

Deadline for proposal submission: January 4, 2016.

To submit a proposal: Please email an abstract (~300 words) and a CV to leevinsel@gmail.com

95 Theses on Innovation

(For Patrick McCray, in Tribute to His Support of Junior Scholars)


Innovation

1.     Innovation is the central ideology of our age. Its core assumption is that technological change is the key to both economic growth and quality of life.

2.     Use of the word “innovation” began rising soon after World War II and hasn’t stopped since. A key turning point came in the late 1970s when the term “innovation policy” took off. Innovation became a resource that could be fostered, grown, created, molded, instrumentalized. In other words, you can instrumentalize instrumentalization. Innovation-speak took off at an even faster rate beginning in the early 1990s. We hear the word today more than any time in history.

3.     There is nothing wrong with the idea of innovation in itself. We know that technological change is an important source of economic development. The problem lies in how we have reshaped our society in the name of innovation. We have corrupted ourselves.

4.     Sober analysts of innovation, like William J. Abernathy, Nathan Rosenberg, and David Mowery, tell us that incremental innovation has always been the primary generator of economic growth. But our society has, unwisely, become obsessed with revolutionary, or “disruptive,” technological change.

5.     The epitome of this focus on radical technical change is Clayton Christensen, a professor at Harvard Business School who has written a series of works on "disruptive innovation." Disruptive innovation occurs when a new technology or service massively undermines an existing industry, sometimes leading to complete collapse. Christensen's works and those of his imitators emphasized the importance of disruptive innovation for economic and technological history.

6.     But recent studies have found that Christensen's theory is profoundly flawed. Of the 77 cases that he used to "prove" his point, only 9 actually fit the criteria of his own theory. Disruptive innovation is neither as frequent nor as important as Christensen led people to believe in the many books he sold and the many talks he gave around the world.

7.     Christensen and his disciples dealt in snake oil for the innovation age. By drawing our attention to falsehoods and things that rarely actually matter, they damaged our culture.

8.     Are you an “innovation thought leader”? You’d make a great Chief Innovation Officer.

9.     The overemphasis on revolutionary technological change has led to a series of false prophets and empty promises. From gene therapy to biotechnology to nanotechnology—waves of jargon and technobabble have washed over us with little payoff.

10.  Given the prevalence of such empty promises, one of our chief tasks must be sounding out false idols. When we sound out contemporary techno-chatter, as Friedrich Nietzsche once put it, we often “hear as a reply that famous hollow sound which speaks of bloated entrails.” You know that “Big Data” is one thundering, odoriferous bout of flatulence.

11.  (If you want to have a little fun, go to the webpage of a granting agency, like the National Science Foundation or the National Institutes of Health; plug an overhyped, underperforming area of research, like nanotechnology, into the search bar; and watch the hits roll in. LOL! LOL!)

12.  Scholars have spilled a lot of ink complaining about "neoliberalism," an economic and political philosophy that came to power with the rise of Margaret Thatcher and Ronald Reagan. Neoliberalism focuses on increasing free markets and decreasing the scope of government via deregulation, privatization, lower taxes, and similar policies. Yet, innovation is the more basic ideology in contemporary society. Left or right, politicians believe that our goal should be to increase innovation in whatever way we can, often to the neglect of other things.

13.  The Innovation Drinking Game: Once a professor joked to his students that they should use one of Obama's State of the Union Addresses to play a drinking game, wherein they would take a sip every time the President said "innovation." That night, he watched the speech. Part of the way into it, as the word innovation flew from the President's mouth again and again and again, the professor was suddenly overcome with fear. What if his students had taken him seriously? What if they decided to use shots of hard liquor in their game instead of merely sipping something less alcoholic? He had anxious visions of his students getting alcohol poisoning from playing The Innovation Drinking Game and of his being blamed for their demise. So long, tenure!


Fear

14.  The religious and political traditions that supposedly undergird American culture hold that we have a moral duty to reject fear. (“Therefore I tell you, do not worry about your life.”—Matthew 6:25; “The only thing we have to fear is fear itself”--FDR) Yet, innovation-speak is a language of fear. The Age of Innovation is an Age of Anxiety.

15.  “Innovation policy” arose in the late 1970s amidst concerns about American industrial decline and falling productivity and, especially, the threat of economic competition from Japan. (The many books on Japanese production systems published during the 1980s and 1990s can be read as a collective keening.) The National Cooperative Research Act of 1984, which fostered innovation through the development of government-industry-academic research consortia and protected participating firms from antitrust law, was meant to imitate Japan’s long-existing research consortia.

16.  Japan’s economy faltered by the early 1990s, but we always need to fear an external other. Within a few years, night terrors about China had replaced bad Japan dreams.

17.  In 2005, the National Academy of Sciences published a report, titled Rising Above the Gathering Storm, which argued that the American economy was falling behind in terms of global competitiveness. (Eek! China!) The report especially emphasized the nation’s need to produce more engineers and scientists through university training. Yet, the scientific organizations, university presidents, and corporate executives who wrote the report stood to benefit directly from the policies recommended in it.

18.  The report was led by Norman Augustine, a former Chairman and Chief Executive Officer at Lockheed Martin. In many ways, the report reflected Augustine’s and Lockheed Martin’s interests. But are Augustine’s interests the general public’s interests? After all, as a major defense contractor, Lockheed Martin’s whole business model depends on our fearfulness.

19.  Moreover, some have argued that powerful organizations, like Lockheed Martin, push for more engineers and scientists because increasing the supply will decrease wages. An overproduction of scientists and engineers will mean that they are more beholden than ever to corporations.

20.  Fear extends from the bottom to the top. Anecdotally, economists and business-school types argue that corporate executives read and obsessed over Clayton Christensen's writings on disruptive innovation not because they wanted to be disrupters but because they so feared being disrupted, that is, having their businesses and industries overthrown.

21.   “Are you feeling disrupted?!?! For three easy payments of $19.99 . . . “

22.  “What we’re really telling people is that if they do not acquire nameless skills of a technological character, they will not have employment. It will be shipped out of the country. So basically it’s a language of coercion that implies to people that their lives are fragile, that is charged with that kind of unspecified fear that makes people . . . it’s meant to make people feel that they can’t get their feet on the ground”—Marilynne Robinson, “A Conversation in Iowa.”

23.  Since the financial crisis of 2008, frightened parents have come to the conclusion that the point of college is to get a good, high-paying job. Large segments of our culture have shifted in this way. In 1971, over 60% of incoming freshmen believed that "developing a meaningful life philosophy" was an important goal. Today, that number has dropped to a little over 40%. In 1971, under 40% of incoming freshmen believed that "being very well off financially" was important. Now that number stands at over 80%. Our society has become more materialistic during the Innovation Age.

24.  Is your kid an innovator? He or she better be or risk being left behind. You know, the best road to innovation is a good education. Hey, you better pay for those expensive test prep classes. Hey, you should probably make sure your kid knows how to code. Hold on. Your kid doesn’t know how to code already?! JESUS CHRIST! You reach for pills to balance the nerves.

25.  News outlets constantly run stories on prevalent diseases that share a major cause: stress.

26.  “Now we are less interested in equipping and refining thought, more interested in creating and mastering technologies that will yield measurable enhancements of material well-being—for those who create and master them, at least. Now we are less interested in the exploration of the glorious mind, more engrossed in the drama of staying ahead of whatever it is we think is pursuing us. Or perhaps we are just bent on evading the specter entropy. In any case, the spirit of the times is one of joyless urgency, many of us preparing ourselves and our children to be means to inscrutable ends that are utterly not our own.”—Marilynne Robinson, “Humanism”

27.  What if we rejected fear, chilled the fuck out, and decided to care for one another?


Transformation

28.  We have transformed important cultural institutions in the name of innovation, and, in the process, we have perverted them.

29.  Innovation is a holistic vision of transformation that ranges from macro-level visions of economy and society, through mid-level ideas about reforming institutions and organizations, to micro-level ideas about reshaping individual human beings.

30.  At the macro-level: In response to the Great Depression, economists created the first meaningful measures of the economy, like Gross Domestic Product. By the late 1950s, however, economists had a puzzle on their hands. Traditional factors, like land, labor, and capital, could not account for much of measured economic growth. In 1957, Robert Solow put forward the theory that the missing factor was technological change. Later studies supported the idea, and over the next thirty years, economists and others vastly advanced our knowledge of how technological change works and how it affects the economy.
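To make the "missing factor" claim concrete, here is a minimal growth-accounting sketch in the spirit of Solow's 1957 argument (the Cobb-Douglas form and the symbols below are illustrative assumptions on my part, not a quotation from Solow):

\[ Y = A\,K^{\alpha}L^{1-\alpha} \quad\Longrightarrow\quad \frac{\dot{Y}}{Y} = \frac{\dot{A}}{A} + \alpha\,\frac{\dot{K}}{K} + (1-\alpha)\,\frac{\dot{L}}{L} \]

\[ \text{so the residual } \frac{\dot{A}}{A} = \frac{\dot{Y}}{Y} - \alpha\,\frac{\dot{K}}{K} - (1-\alpha)\,\frac{\dot{L}}{L} \]

Whatever output growth the measured inputs of capital and labor cannot account for lands in that residual term, which Solow identified with technological change, broadly construed.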

31.  On the mid-level: There are whole libraries dedicated to reforming institutions and organizations for the sake of innovation. The scale of transformations at this level varies widely, from whole regions and cities to individual firms and universities.

32.  When Silicon Valley became the place to watch and to be and books started being published on its seemingly magical rise, other places tried to imitate its success via tax policy and subsidization.  An entire scholarly literature arose on “regional innovation systems” or “innovation clusters” or “innovation districts.”

33.  Walk through the business self-help book section in a store or library. Try to find a book that doesn’t contain the word innovation.

34.  Another example of organizational transformation: from its founding in 1950, the National Science Foundation's primary mission was to fund research that fell outside of industry's interests, research that the drive for profits would leave untouched. Yet, since the 1970s, the NSF has increasingly faced pressure to fund exploitable research, research that will lead to entrepreneurship and innovation. Similarly, the National Institutes of Health now requires those applying for grants to specify how their research contributes to innovation.

35.  Yet, the cultural institution that has been most changed in the name of innovation is the university.

36.  Scholars often divide industrial civilization into a series of technological revolutions. The First Industrial Revolution was centered in England and focused on steam technology, the production of cotton goods, and the rise of the factory system. The Second Industrial Revolution was based primarily in Germany and the United States and it included a wide variety of technologies and new industrial sectors, including the steel, railroad, electricity, chemical, telegraph, telephone, and automobile industries. The Third Industrial Revolution involves many nations and focuses on electronics, computers, the Internet, and digital technologies more generally.

37.  The Second Industrial Revolution was the most impressive technological revolution in human history, and it was built upon the back of a number of organizational changes (innovations?), such as the creation of engineering schools. Yet, social norms required some distance between universities and corporations and the continuation of a traditional model of education. Engineering students, for instance, were still required to take general education classes so that they would be well-rounded. And notions like "pure science" created a barrier between universities and industries: to be a scientist who worked for industry was often to forsake an academic career.

38.  In the innovation age, we have remade universities in the corporate image.

39.  The most famous example of this remaking is the Bayh-Dole Act of 1980. Before this law, you could not patent inventions that arose from federal funding. The reasoning behind this ban made sense: why should you stand to benefit individually from research supported by the public’s tax dollars? But Bayh-Dole changed this rule in the name of fostering entrepreneurship and innovation. Universities became factories for churning out new business ventures. In some academic departments, if you don’t have a startup or two, you are a total square.

40.  Universities changed policies around patenting and the licensing of scientific instruments to become more business-like. (Let your mind take a trip through Philip Mirowski’s dark truthful vision, Science-Mart.)

41.  An entire ecosystem within universities emerged around federal funding. Many researchers have come to live the stressful life where their job position is supported almost entirely by grants. If they don’t bring in grants, they aren’t paid a full salary. In research universities throughout the nation, the most important metric is how much “sponsored research” money a faculty member is bringing through the door.

42.  One of the primary forces increasing the price of college education is the creation of a new class of administrators and executives. Many of these people are charged with the task of turning universities into innovation machines.

43.  On the micro-level: Innovation is a national and partly natural resource: it is rooted in human creativity, which is rooted in cognition, which is rooted in biology and the workings of the brain. The innovation-minded believe that we should remake the national population in the name of fostering technological change. The primary doctrine of faith for this effort is called “STEM Education.” (STEM=Science, Technology, Engineering, and Math)

44.  STEM advocates argue that we should push technical education into lower and lower school grades, that is, onto younger and younger children. Recently, for instance, New York City mayor Bill de Blasio announced a plan to offer computer science in all of the city's middle and high schools. The intentions are good, but the outcomes are unclear and may be their own kind of hell. The point after all is to render students useful to corporations. As Gabrielle Fialkoff, Director of New York City's Office of Strategic Partnerships, told reporters, "I think there is acknowledgment that we our students should be better trained for these jobs."

45.  One of the saddest expressions of innovation madness is so-called STEAM education. Because the arts and humanities have been left out of the STEM equation, their advocates now argue that the liberal arts, too, generate wealth. They share YouTube videos of Steve Jobs declaring, "It's in Apple's DNA that technology alone is not enough—it's technology married with the liberal arts, married with the humanities, that yields us the result that makes our heart sing." At this point, advocates for the arts and humanities always look like they are about to faint. "See," they say, "See. Steve fucking Jobs!!" The "me too, me too!" logic of STEAM talk is pathetic. It forsakes what is best about general education.

46.  The organizational corollaries to STEAM banter are all of the academic units dedicated to “creativity studies” that have opened up around the world. Such units focus on problems at the art/creativity-corporate interface. As Buffalo State’s International Center for Studies in Creativity puts it, “Creativity, creative problem solving, and change leadership play a major role in today’s workplace. Professional success is linked to the ability to master creativity, to operate as a problem solver, to innovate and to lead change.”

47.  Are you a change leader?

48.  The better argument for the arts, humanities, and basic science research (including space exploration) is this one: our society has become obsessed with becoming wealthy—via innovation—but it has forgotten what it means to be rich. A rich society values beauty, pure wonder, and the contemplation of life’s meaning.

49.  The root of our problem is that we treat innovation as a basic value, like courage, love, charity, and diligence. In reality, innovation is simply the process by which new things enter wide circulation in the world. Innovation has nothing to say about whether these new things are beneficial or harmful.

50.  One of the great innovations of the 1980s was crack cocaine. It was a new product that hit the market. And people REALLY wanted it!! What’s more, it opened up new business ventures all over the country. Risk-taking! Entrepreneurship!

51.  In the context of innovation, we must revisit the economist William Baumol’s classic essay, “Entrepreneurship: Productive, Unproductive, and Destructive.” Large swaths of innovative activity have little to do with improving our world. 


Inequality

52.  The innovation age has been an age of increasing inequality.

53.  This correlation isn’t mere coincidence.

54.  Many of the so-called neoliberal policies, like privatization, deregulation, and the lowering of taxes (e.g., "trickle-down economics"), that have exacerbated inequality in the United States were, in fact, carried out in the name of entrepreneurship and innovation. Increased capital for the wealthy was to generate new ventures and, ultimately, "job creation." But here we are: with stagnant wages and what many see as a declining middle class.

55.  The economist Joseph Schumpeter, the herald of innovation, was a brilliant and sensitive scholar. Schumpeter famously described capitalism’s habit of overturning the old and ushering in the new as “the gale of creative destruction.” But so often in the United States, creative destruction is used to justify American-style unemployment. Industry shuts down; workers are left with little hope. (Consider all of the information and computing technology innovations that have allowed American companies to move manufacturing jobs to other nations.)

56.  Moreover, many innovation policies, like the public funding of research and the creation of business incubators and the like, probably just give resources to people who are relatively well off.

57.  The rise of innovation policy takes place against a larger backdrop and a longer trajectory of social stagnation in the United States. The Civil Rights Movement and the Great Society’s “war on poverty” both crashed upon the shoals of the late-1960s. The lesson appeared to be that social policy was largely a failure. Social problems could not be legislated or administered out of existence. Even quasi-liberals, like Daniel Patrick Moynihan, argued that the Great Society constituted a Maximum Feasible Misunderstanding. The neoliberal intellectual position—embodied in the teachings of writers like Friedrich von Hayek—that economy and society were simply too large and complex to be understood and steered became near dogma.

58.  In the Age of Innovation, the only hope we hold out to the poor is education reform. If we can give impoverished students technical skills, they can find a place within the industrial system. (Hey, maybe we should give each child born into the world a laptop. Hold on. Someone already thought of that.) More profound social changes are hardly even mentioned anymore.

59.  This technical-skills-as-savior motif is common throughout our culture. For example, over the last decade, we have witnessed the rise of the so-called maker movement, a combination of do-it-yourself and hacker subcultures. The maker movement primarily consists of white men patting themselves on the back for being creative. But sometimes the makers have broader fantasies, including opening up maker hubs in centers of poverty. As a leader of a maker center in Nairobi told a reporter, “The crux of the problem is poverty and so something needs to done to address this directly. I hope to do this through the maker education. With these skills, the youth will certainly have a better chance at life.”

60.  Books and articles have conducted several autopsies on a recent debacle: Facebook's Mark Zuckerberg spent $100 million trying to improve the school system in Newark, New Jersey. Many aspects of the effort were disastrous, and the rest of the results were mixed at best. Through the effort, Zuckerberg "learned about the need for community involvement." In other words, he learned something that has been a truism in social reform efforts for at least thirty years. Zuckerberg and his fellow Silicon Valley denizens have almost no solutions for problems that have haunted industrial civilization for the last hundred years. (In many cases, we are talking about multi-generational poverty that goes back to the time of slavery and beyond.)

61.  Echoes of an old nursery rhyme: Mark Zuckerberg, his hype machine, and all of his money could not solve the problems of the Newark public school system.

62.  Here’s an irony for you: One of the most innovative sectors in the last thirty years has been the rise of the private prison industry.

63.  By locking up a lot of black men, we have enriched white prison executives and given jobs to rural white workers.

64.  Silicon Valley is a brutally unequal place. Most localities have an educational bell curve: the majority of residents have some middling level of education, while smaller numbers have either very little education or heaping piles of it. Silicon Valley has an inverted educational bell curve. There are many highly educated people, many uneducated ones, and almost no one in between. The uneducated tend lawns, care for children, and make skinny lattes for the educated. In other words, the uneducated are servants; the educated are masters.

65.  Much of the hype coming out of Silicon Valley ignores inequality entirely. In 1970, the songwriter and poet Gil Scott-Heron released the song "Whitey on the Moon." Scott-Heron decried how—in the midst of the space race with the USSR—policymakers had prioritized putting white men on the moon over caring about longstanding issues, like urban poverty. "A rat done bit my sister Nell, but whitey's on the moon." Today, rich white boy techno gurus, like Elon Musk, fantasize about going to Mars, ignoring the impoverished immigrants in their backyards.


The Invisible Visible Hand of Government

66.  Beginning in the mid-1960s, the US federal government used regulations to generate innovation in laggard industries around important social priorities, like safety and pollution control. Since Ronald Reagan’s neoliberal revolution, such regulation has fallen into disfavor, which is not to say that new regulation has disappeared completely. Presidents, including George H. W. Bush and Barack Obama, have created new important regulatory regimes, but they have preferred other methods.

67.  One of those preferred methods has been using federal money to support research, including through the formation of academic-government-industry research consortia. One example is the federally funded US Advanced Battery Consortium, which was created to help automakers meet the State of California's mandate for Zero Emission Vehicles. The consortium did research for years, but once California's push for Zero Emission Vehicles (read: electric cars) was struck down, automakers used the research little, if at all. They certainly did not fundamentally alter the national population of automobiles in the name of decreasing emissions.

68.  In other words, research consortia have not been nearly as effective at generating socially beneficial technological change as regulation has. (For example, the Clean Air Act Amendments of 1970 effectively lowered some automotive emissions by over ninety percent.) Without regulatory pressure, industry has little incentive to move knowledge produced through these research ventures into actual products.

69.  We could move towards a post-fossil fuel world if we put our mind to it, if we actually gave a shit.

70.  In general, today’s technological elite obscure the role that government has played in innovation. Scholars, like Mariana Mazzucato and Patrick McCray, have shown, for example, how many of Apple’s products depended on federally-funded research, especially research produced by the US military and the Defense Advanced Research Projects Agency (DARPA).

71.  This mindset reaches its highest point when techies argue that Silicon Valley should secede from the United States because they have it all figured out, because they cannot be bothered to deal with all that has been built before. Talk of secession demonstrates a wild historical ignorance: Silicon Valley would not have become what it is without the needs and demands of the US military.

72.  Obama was right to say, “You didn’t build that.”

73.  In between Twitch viewings, trips to Reddit, and frantic porn consumption, young white men have converged around the philosophy of libertarianism, the belief that government should get out of the way in the name of liberty and free market capitalism. Sometimes this worldview takes the form of “cyberlibertarianism,” the belief that computers, the Internet, and digital technology of all sorts both arose out of freedom and bring freedom wherever they go.

74.  Bitcoin, a "cryptocurrency," is the ultimate cyberlibertarian fantasy, in which government can be removed even from the basic functioning of money.

75.  In 2014, the writer Sam Frank went to California and interviewed cyberlibertarian types there—many of whom were obsessed with topics like artificial intelligence and vastly increasing the length of human life. The geeks Frank interviewed were disciples of a number of gurus around these topics, including Peter Thiel, a co-founder of PayPal and head of the companies Palantir Technologies and Mithril Capital Management. Frank found that Thiel and his ilk "take it on faith that corporate capitalism, unchecked just a bit longer, will bring about an era of widespread abundance. Progress, Thiel thinks, is threatened mostly by the political power of what he calls the 'unthinking demos.'"

76.  The paragon of this mode of thought is Ray Kurzweil, a technologist who has increasingly come to focus on the "singularity," a moment, which Kurzweil prophesies will happen around 2045, when machines will surpass human intelligence, creating a near-omniscient power that will solve most of our problems. Indeed, since Kurzweil believes we will be able to download our consciousness onto computers by that time, most human problems—the existential issues that have always been with humanity—will simply evaporate. Because the singularity is near—so clearly a secularized version of the Christian apocalypse—and because unfettered capitalism is bringing it into being, there is no need for government.

77.  Kurzweil contributed a number of important inventions early in his career. He also takes somewhere between 100 and 250 vitamins and supplements a day. For sure, he will sell you vitamins and other “longevity products” at his homepage, www.rayandterry.com. From that site: “Science is quickly developing the technologies needed to radically extend the quality human lifespan. Meanwhile, we need to stay healthy long enough to take advantage of these scientific breakthroughs.” Ray Kurzweil, Vitamin Entrepreneur!!

78.  There are exactly two possible reasons why Google’s Larry Page and Sergey Brin hired Kurzweil: A) They are ceaselessly smoking Elon-Musk-on-Mars grade dope beyond our wildest imaginations. B) It’s a cynical ploy to seem cutting-edge and appeal to nerds. (In reality, Kurzweil’s appointment at the company should remind us that foolishness rises to the very top. Google, too, will end.)


Maintenance

79.  Our society overvalues novelty and neglects taking care of what we have. We can build a thing—say, a road or a bridge—but once built, do we have the will to service and repair it?

80.  At its broadest level, maintenance includes all those activities aimed at keeping things going. It is everything that allows us to continue on.

81.  Other thinkers have taken this broad perspective before. For instance, when Karl Marx was formulating his theory of labor-power, he wrote, “The value of labor is equal to the value of the subsistence goods necessary for the maintenance and reproduction of labor.” Now consider the costs of maintaining and reproducing everything else.

82.  Our culture degrades those involved with maintenance and repair. Innovation is for the great ones. Taking care of what already exists is for losers, burnouts, slackers.

83.  Education is about social reproduction—in this view, a form of maintenance. Yet, think about how American culture values grade-school and high-school teachers and how little we pay them. Recall all of the vile sayings we have about such people. "Those who can't, teach."

84.  Similarly, the current fight over fast food workers’ wages is, in part, an argument for the dignity of being a maintainer.

85.  In his book, Technology’s Storytellers, and other works, the Jesuit priest John Staudenmaier argues that our stories about technology are deeply interwoven with what he calls “technological style,” or the relationship between a designer’s mindset and values and a constructed artifact or system. Of technological style, Staudenmaier writes, “Because a technological design reflects the motives of its designers, historians of technology look to the values, biases, motives, and worldview of the designers when asking why a given technology turned out as it did. Every technology, then, embodies some distinct set of values. To the extent that a technology becomes successful within its society, its inherent values will be reinforced.”

86.  The official technological style of our culture is embodied in TED Talks and digital technology—envision pornography produced by Apple: cool hues, white and silver, everything soft lit, people in hoodies, precisely the mise-en-scène of films like Ex Machina.

87.  But if we look deeper, we see that our real technological style is dilapidation.

88.  Our technological values are best embodied by collapsing buildings, rotting bridges, and abandoned, trash-strewn lots. It is the physical and infrastructural outcome of “creative destruction.” Throughout the nation, de-industrialized, Rust Belt cities molder.

89.  If you want to see who we are, go to Detroit.

90.  Every year the American Society of Civil Engineers publishes a report card on American infrastructure, and every year American infrastructure receives low marks. For sure, this professional society has incentives to play up infrastructural problems. If maintenance and repair spending go up, civil engineers have more work. (Imagine if an organization called something like the Dental Hygienists of America published a report finding that the single most important factor for making a good first impression was shiny, white teeth.) But we can also see the truth of the ASCE's report cards. Everything around us is in shambles. A great infrastructural building boom extended from the New Deal through at least the 1950s. But now these old creations look tired. Rode hard, put away wet.

91.  Scholars who study infrastructure often say that it is “invisible.” From one perspective, such claims are melodramatic claptrap. “How can this bridge be invisible? I’m looking right at it.” But invisible is also a moral term having to do with what we avoid, what we are too embarrassed to fix our sight on. For instance, we could say that the homeless are invisible. When we pass them, we look away. Infrastructure and the poor belong to a massive shadow nation that haunts our country, a nation called “Our Shame.”

92.  Our devaluing of maintenance and our neglecting of infrastructure find their ironic exemplars in “conservative” politicians, like New Jersey Governor Chris Christie. Since the writings of Edmund Burke, the goal of conservatism has been conserving our values, taking care of all that we have inherited. (The image is that we should give the dead a seat at the metaphorical table of deliberation.) But Christie neglects even the conservative tradition. To his mind, conservatism simply means “don’t raise taxes.” You can see Christie’s pudgy face in each of the state’s innumerable potholes. 


The Future

93.  We will know that our society has turned a corner when our leaders become embarrassed to stand at the pulpit and sermonize about innovation. The audience already knows these words are hollow. But as usual, our leaders are deaf.

94.  Perhaps we already see this change underway. Smart speakers know that if they say “innovation” their listeners will burn red with embarrassment and guffaw behind their backs.

95.  The fall of innovation-speak will be a chance to reorient our society around values that actually matter. Will we seize this opportunity? Or will we allow corporate executives and other elites to seduce us with another wave of shiny, sparkling nonsense? The most radical thought is that there are principles beyond usefulness, beyond utility.

Nietzsche, Bourgeois Anti-Modernist of the Second Industrial Revolution

This is a talk I'm giving at the conference, Nietzsche, Science, and Technology, which is co-organized by the Nietzsche Circle and Stevens Institute of Technology.

Friedrich Nietzsche was a bourgeois anti-modernist of the Second Industrial Revolution. Like many others of his time, he believed that the rise of industrial capitalism and the modern nation state was leading to decadence and moral decline. Nietzsche isn’t nearly as insightful as his defenders like to pretend, and because his ultimate solution to the problems of modernity is so morally debased, we should leave him in the dustbin of history. That is what I will argue today.

From the beginning to the end of his career, Friedrich Nietzsche's approach was historical and quasi-sociological, yet few scholars (English-language scholars, at least) apply this same perspective to Nietzsche himself. It's the folly of philosophers. As Nietzsche wrote, "You ask me which of the philosophers' traits are really idiosyncrasies? For example, their lack of historical sense, their hatred of the very idea of becoming . . . . They think they show respect for a subject when they de-historicize it, when they take the 'view from eternity'—when they turn it into a mummy."[i] When philosophers do try to take the historical view, their efforts are ham-handed, the work of individuals who have neither the instinct, nor the sensitivity, nor the training necessary for insightful historical work. Julian Young's 2010 Nietzsche biography is a perfect example.[ii] While much in Young's biography is fine—and I think Young's theory that Nietzsche is a kind of communitarian is provocative and fruitful, if flawed—the book only rarely succeeds in properly situating Nietzsche in his historical context. The book's subtitle, A Philosophical Biography, tells you all you need to know: it's the biography of a mummy.

            Young’s book suffers from another flaw that runs throughout Nietzsche studies: it is far too worshipful. Young wishes to defend and save his master, rather than turning against him. It’s a basic irony: in Zarathustra, Nietzsche’s allegory of the “Three Metamorphoses” of the camel, the lion, and the child enshrines revolt against our teachers. In these stages, the camel as beast of burden carries the heavy load necessary to begin self-development. The lion, however, bites the hand that has fed him. “He seeks out his last master; he wants to fight him.” Only once this battle is won can the child emerge to “create new values.”[iii] Ah. But Nietzsche studies lack creative children. Nietzsche told us to overthrow masters, but Nietzsche scholars everywhere, like beasts of burden, carry his water.

            No. What we need is a historian who will eviscerate Nietzsche as Nietzsche did Wagner and so many others—to free us from the burdens of this “Anti-Christ.” That historian is not me. I am an Americanist and a historian of science, technology, and business. I lack the language skills and training in the history of German politics, society, science, music, and letters that would be necessary for a full critique. If Nietzsche could fantasize about his “Blond Beast,” I can envision my own: I think she’ll likely be a brilliant, bodacious historian of science, and she’ll leave Nietzsche gutted and twitching on the floor right where she finds him. Today, I only want to point the way. Call it a “Prelude to the Philosophy of the Future.”

In my remarks, I will rely on one of Nietzsche's core methods—that is, reading people unfairly. For Nietzsche was one of the most unfair and least charitable readers in history. The number of writers whom Nietzsche read badly is truly mind-boggling. I will repay him the favor.

            The core of my argument will involve applying a Nietzschean genealogy to Nietzsche and his fans. Such an approach finds master and disciples to be a form of the Hegelian "beautiful soul," perhaps best described as the pale angry bedroom dweller or the hostile nerd—a wallflower who judges life's active participants to be members of the herd. I can level this criticism because I was one of these people. When I was 16 years old, I watched the coming-of-age film, Clueless, while on Christmas vacation with my family in sunny Florida. I identified strongly with the older brother character played by Paul Rudd, an angst-filled, existentialist type who wore black, whose every word reeked of sarcasm. In one scene, Rudd's character was lying by a sun-drenched poolside reading a book. I wondered what it was. Through the help of a high school teacher, I discovered that it was the Walter Kaufmann-edited volume, Basic Writings of Nietzsche. Discovering Nietzsche changed my life. It began a trajectory that ultimately led to me becoming a professor. Nietzsche also made a perfect complement to Nine Inch Nails, Rage Against the Machine, and the other anger- and angst-ridden musical acts that I adored at the time. I could lie in my bedroom and judge the rest of the world. I thought most other people were idiots. I especially loathed the evangelical Christian kids who were dumb enough to go to the Christian Youth Center in my town. What tools! I thought: probably I'll be an Ubermensch. You know, maybe not, but probably.

            To make this a little more academic, I will argue that Nietzsche was a well-known type from this period, which scholars refer to as the bourgeois anti-modernist. In his book, No Place of Grace: Antimodernism and the Transformation of American Culture, 1880-1920, Jackson Lears examines the history of anti-modernist movements in the United States, including the Arts and Crafts movement, the return of the martial ideal, the fascination with Medievalism, a turn towards Catholic art and spirituality, and several others. Like Nietzsche, the members of these movements came from bourgeois families. After completing a table examining the backgrounds and beliefs of about seventy anti-modernists, Lears concluded that the mindset "was most prevalent among the better educated strata of the old-stock ruling class."[iv] Anti-modernists, then, were bourgeois conservatives alienated from the changing world of industrial capitalism. Summarizing this worldview, Lears writes, "This was the vision which haunted the antimodern imagination: a docile mass society—glutted by sensate gratification, ordered by benevolent governors, populated by creatures who have exchanged spiritual freedom and moral responsibility for economic and psychic security."[v] You could hardly imagine a more Nietzschean sentence.

            Yet, we needn’t use anti-modernism as an analyst’s category as if Nietzche wouldn’t have understood the term. He states his anti-modernist intentions explicitly! As he writes of Beyond Good and Evil in Ecce Homo, “This book is in all essentials a critique of modernity, not excluding the modern sciences, modern arts, and even modern politics, along with pointers to a contrary type that is as little modern as possible—a noble, Yes-saying type.”[vi] The point is that he was part of a herd of people making such declarations. The author of the Untimely Meditations was himself very timely.

            The frustrating part is that Nietzsche's rootedness in his time has been widely recognized in history and political science. For example, Fritz Stern's 1961 book, The Politics of Cultural Despair: A Study in the Rise of Germanic Ideology, argued that Nietzsche was a part of what Stern called a "conservative revolution," which sang "a rhapsody of irrationality, denouncing the whole intellectualistic and scientific bent of German culture, the extinction of art and individuality, the drift towards conformity."[vii] The enemies of these conservatives were industry, democracy, liberalism. According to Google Scholar, Stern's book has been cited 823 times, but as far as I have been able to find, only once in a study of Nietzsche, a 1991 essay by John Bernstein, and Bernstein's essay has only been cited 30 times. It was never taken up in the wider Nietzsche literature.[viii] Kaufmann and Schacht, for instance, never bother to address Stern's work. The anti-historicism of Nietzsche studies goes much deeper. In his 2014 book, After Hegel: German Philosophy, 1840–1900, Frederick Beiser examines the controversies that marked German philosophy during this period: the search for the identity, or future, of philosophy; the question of materialism; worries about the limits of knowledge and the constant presence of ignorance; the development of historicism; and the rise of Kulturpessimismus, or cultural pessimism. Nietzsche inherited all of these problems and strains, and yet Nietzsche fans so often embrace his claims to radicalness and originality.

I could go on like this for ages. But why bother? Pointing out that Nietzsche was a product of his age does not knock him off his horse. It only deflates his claims to originality. In the time remaining, I would like briefly to discuss Nietzsche's central problematic during his mature writings, namely the problem of nihilism. The problem itself was widely shared during the period Nietzsche was writing. Then I would like to examine Nietzsche's central answer to the problem of nihilism—and this reflection will bring me to why we should reject Nietzsche as a forebear.

From his early writings, such as "On the Future of Our Educational Institutions" and the Untimely Meditations, through his last scribblings, Nietzsche always decried the twin-headed beast of industrial capitalism and the centralized state—the two forces that were, for instance, remaking his beloved schools and universities. Industry and the state were unified in the person and efforts of German Chancellor Otto von Bismarck, a conservative, anti-democrat of whom Nietzsche was no fan. Yet, Nietzsche everywhere bemoans the role that democracy and the push for equality are playing in undoing society, including by leveling institutions of learning. "What conditions the decline of German culture?" Nietzsche asks. "That 'higher education' is no longer a privilege—the democratism of Bildung, which has become 'common'—too common." When this leveling was compounded by secularization and the Death of God, Nietzsche, as we all know, believed that Germans were sliding towards what he called the "Last Man," that society was falling into what Nietzsche in his notebooks called "nihilism." This then is Nietzsche's famous problem: given contemporary society's decline into nihilism, how do we save Culture?

If I had more time today, I would outline an extensive critique of Nietzsche's theory of value that undergirds this problem. I think he's just wrong about how human values work. Part of the problem is that, like a lot of people in the long 19th century, including Dostoevsky and William James, Nietzsche believed that secularization would inevitably lead to a cultural crisis. This belief was understandable but basically wrong, and its assumptions were founded on a characteristically masculine understanding of ethics and morality, which asserts that moral action is based on abstract principles. Feminist theorists, like Carol Gilligan, just have a better understanding of human action as being grounded in an "ethics of care," which fundamentally has to do with our place in networks, and webs, of other humans. In Nietzsche's time as well as today, if you ask people what they value, they will give you mundane answers, like family, work, leisure, whatever. And when we watch, over the course of two or three generations, a family go from being quite religious to being completely irreligious, the resulting agnostics, tepid atheists, or secular humanists have completely normal notions of right and wrong. Now you can say that today's secular humanists are instances of Nietzsche's "last man," and I often like to tease my students that when they sit around for hours in their boxers, playing online roleplaying games, chowing down on Doritos, and slamming Mountain Dew, they are precisely the thing that HORRIFIED Nietzsche so greatly. But really, what kind of prick do you need to be to judge other humans in this way? Well, you can be an angry bedroom dweller, a 16-year-old boy who lies in his stained sheets listening to Joy Division and Einstürzende Neubauten. (I'm always amazed by academics who will use language like herd, mob, masses, and then turn around and do something very herd-like. For instance, they will brag about not watching television, a true herd animal technology, not realizing that not-watching-television is a classic trait of belonging to a certain self-important academic culture. In other words, there is no one herd. There's a bunch of different herds. It's herds all the way down.)

Nietzsche, like Durkheim and Freud, adheres to a mode of social explanation that the sociologist John Levi Martin calls "sociopathic epistemology" because it has an "implicit dismissal of explanations that might be offered by those people whose actions we are studying."[ix] Nietzsche believes that he can understand others better than they themselves can and that he can understand them from the confines of his bedroom. He doesn't need to ask what they think. This position leads to some insane interpretations. For instance, he argues that men become Trappist monks because they are "too weak willed, too degenerate, to be able to impose moderation on themselves," that is, "radical means are indispensable only for the degenerate; the weakness of the will . . . is itself merely another form of degeneration."[x] But isn't a simpler interpretation that—given both belief in a metaphysical God and a theology that values prayer as the highest possible form of activity—spending a life in little but prayer was seen as a great honor? Moreover, many of Nietzsche's interpretations of others' actions depend on his being able to suss out "instincts" and "drives" that lie behind and beneath those actions and yet are completely invisible to the naked eye. Furthermore, given that many of the people whom he discusses have been dead for hundreds if not thousands of years, how can he feel so confident talking about their "instincts"?

Nietzsche's theory of value sucks, but his answer to the question of nihilism is much more troubling. Given that democracy, capitalism, and the priorities of the herd are leading to nihilism, what is the answer? In Beyond Good and Evil and other works from Nietzsche's late period, the answer is fairly straightforward: it involves creating a new race or caste that will ensure the dominance of noble culture. In Section 208 of Beyond Good and Evil, Nietzsche bemoans the "absurdly sudden attempt at a radical mixture of classes, and hence races" which leads to a sickness of the will, and this sickness further leads to skepticism and the "introduction of parliamentary nonsense." Nietzsche then hopes out loud that Russia will become a political and military threat to Europe so that Europeans "would have to resolve to become menacing, too, namely, to acquire one will by means of a new caste that would rule Europe, a long, terrible will that would be able to cast its goals millennia hence."

Note the complex intertwining here: races, classes, and castes are one and the same. Races physiologically contain bundles of instincts and drives expressed in the will. And physiology produces philosophical views. As Nietzsche writes in that same section, “For skepticism is the most spiritual expression of a certain complex physiological condition that in ordinary language is called nervous exhaustion and sickliness; it always develops when races or classes that have long been separated are crossed suddenly and decisively.” The only answer is the creation of a new nobility. In section 251 of the same book, Nietzsche argues that Jews should be genetically accommodated into the German race because Jews are “the strongest, toughest, and purest race now living in Europe.” After outlining this accommodation scheme, Nietzsche writes, “I am beginning to touch on what is serious for me, the ‘European problem’ as I understand it, the cultivation of a new caste that will rule Europe.”

Because Nietzsche was fundamentally a Lamarckian and believed that acquired traits could be passed down from parent to child, physiology was a complex product of the interaction of genetics, culture, and personal development. Yet, while Nietzsche’s polemics were meant to rouse individuals out of their slumber, it was only this future caste—a population of individuals spread over multiple generations—that Nietzsche believed could stave off nihilism. As he wrote in Twilight of the Idols, “The beauty of a race or family” is “the accumulated work of generations. . . . The law holds that those who have [good things] are different from those who acquire them. All that is good is inherited; whatever is not inherited is imperfect, is mere beginning.”[xi]

My interpretation here accords with the vision of Nietzsche as a “philosophical naturalist” that has dominated Anglophone Nietzsche studies for the last fifteen years.[xii] More and more, philosophers have focused on how Nietzsche took up Darwin, Lamarck, and other sciences of his day, especially physiology. The greatest gift for me while writing this paper has been discovering Brian Leiter’s writings on Nietzsche’s naturalism. I’m sure I’ll be reading Leiter for a long time. I agree with Leiter that Nietzsche was not writing “political philosophy,” and that people who talk about Nietzsche’s political philosophy are often doing a great disservice to his work. Nietzsche has almost nothing to say about political arrangements, and he usually has nasty things to say about the state.

But what I think Leiter misses is that Nietzsche fits into a larger trend in German society—which saw the state and formal politics as the enemy of Culture. As Wolf Lepenies writes in his book, The Seduction of Culture in German History, quoting from Norbert Elias's book, The Germans, "embedded in the meaning of the German 'culture' was a non-political and perhaps even anti-political bias symptomatic of the recurrent feeling among the German middle-class elites that politics and the affairs of the state represented the area of their humiliation and lack of freedom, while culture represented the sphere of their freedom and their pride. . . . [Eventually] this anti-political bias was turned against the parliamentary politics of a democratic state."[xiii] Fittingly, Nietzsche's new caste was not rooted in politics, but it was a caste, a new nobility and aristocracy, and Nietzsche was also known to hold the false belief that he himself was a descendant of Polish nobility (something even his crazy sister wasn't foolish enough to buy into). Nietzsche's new caste would require social hierarchy, including, famously, slavery. But really Nietzsche's vision of nobility involved preserving hierarchies of all types. As he writes in section 239 of Beyond Good and Evil, "Wherever the industrial spirit has triumphed over the military and aristocratic spirit, woman now aspires to the economic and legal self-reliance of a clerk," that is, to "defeminize herself."

The naturalistic viewpoint that reaches maturity in Beyond Good and Evil continues through the rest of Nietzsche’s career. I do not have time to go into great detail, only to make a few observations. On the Genealogy of Morality describes how a people fell into slave morality over the course of generations and how this decline reshaped their physiology. The book was a sequel to Beyond Good and Evil, and it fits hand-in-glove with the vision of race and caste spelled out in that earlier work. Moreover, On the Genealogy of Morality was in part a response to Paul Rée’s Darwinian treatise The Origin of Moral Sensations, which was grounded in the thinking of Herbert Spencer. And On the Genealogy of Morality presents a Lamarckian response to Rée—attacking Rée’s and Spencer’s assertions that morality is about fitness and usefulness, Nietzsche holds that it is about power. Finally, at the end of the book’s first essay, Nietzsche proposes an academic prize competition for essays on “historical studies of morality,” and he believes that such histories “require first physiological investigation and interpretation, rather than a psychological one.” He goes on: “Something, for example, that possessed obvious value in relation to the longest possible survival of a race . . . would by no means possess the same values if it were a question, for instance, of producing a stronger type.”

Brian Leiter nicely takes apart Michel Foucault’s effort to turn “genealogy” into an abstract form of cultural critique. But what I do not yet understand is why we should not reject the metaphorical reading of Nietzsche’s notion of genealogy and instead see Nietzsche’s moral history as literal genealogy—that is, as a recounting of biological lines. Moreover, the notion of biological lineage played a role in other works that Nietzsche was writing during this period. For example, in Book Five of The Gay Science (added in 1887), Nietzsche argues that human consciousness arose from the need for communication. He believes that this need builds up through the ages. “It does seem to me as it were that way when we consider whole races and chains of generations: Where need and distress have forced men for a long time to communicate . . . the ultimate result is an excess of this strength and art of communication.”[xiv]

Nietzsche’s focus on noble and degenerate races does not end in On the Genealogy of Morality but continues right through to the end. In Twilight of the Idols (1888), Nietzsche argues that Greek nobility rejected philosophical dialogue (dialectics) because “they were considered to be bad manners, they were compromising.”[xv] He goes on, “Honest things, like honest men, do not carry their reasons in their hands like that. It is indecent to show all five fingers. What first must be proved is worth little.” The success of Socrates and Plato, then, only came because the Greek noble class was degenerating. “Old Athens was coming to an end.”[xvi] Socrates and Plato were “symptoms of degeneration, tools of Greek dissolution,” Nietzsche writes. Socrates and Plato did not agree philosophically because they were correct but rather because they “agreed in some physiological respect, and hence adopted the same negative attitude to life—had to adopt it.”[xvii] That is, because beliefs are the product of instincts and drives, Socrates and Plato are on the same page because their biological urges align. Nietzsche writes, “Socrates’ decadence is suggested not only by the admitted wantonness and anarchy of his instincts, but also by the hypertrophy of the logical faculty and that sarcasm of the rachitic which distinguishes him.”[xviii] Moreover, Socrates, with his love of dialectics, “belonged to the lowest class.” How do we know this? Because Socrates was ugly. “Ugliness is often enough the expression of a development that has been crossed . . . . or it appears as declining development (that is, degeneration). The anthropologists among the criminologists tell us that the typical criminal is ugly: monster in face, monster in soul.”[xix] To summarize, Nietzsche saw the rise of Socratic philosophy as a battle between racial types.

To give one more example from Twilight of the Idols: in the section “The Labor Question,” Nietzsche writes, “The stupidity—at bottom, the degeneration of instinct, which is today the cause of all stupidities—is that there is a labor question at all. . . .The hope is gone forever that a modest and self-sufficient kind of man, a Chinese type, might here develop as a class. . . . If one wants an end, one must also want the means: if one wants slaves, then one is a fool if one educates them to be masters.”

With that, allow me to pivot back to Nietzsche’s historical context and conclude. Germany in Nietzsche’s time was undergoing a transformation that we now call the Second Industrial Revolution, the most dramatic technological revolution in human history, including the creation of the railroad, steel, electrical, chemical, pharmaceutical, telegraph, telephone, and automobile industries. I am not naïve. A lot of this revolution was truly terrible in terms of human costs, and some of its consequences, like climate change, we still have no idea how to address. Yet, by almost any measure imaginable, more humans on Earth have a higher quality of life today than at any time in history precisely because of this transformation. Moreover, the exact changes in German universities that Nietzsche attacked in his works were an essential factor in creating the capacity for knowledge production that led to this revolution, which is why American universities, including Stevens, copied that model. The Bismarck government gave birth to social welfare as we know it, a hallmark of humane existence. And the social movements and struggles that led to a partial and temporary humanizing of capitalism arose from fellow-feeling and class consciousness that Nietzsche would have mocked as the weakness of pity, the rule of the herd. Because Nietzsche lived in his bedroom and refused to read the newspaper, he understood none of this. If you want to be a cultural pessimist, like Nietzsche or Wittgenstein, and believe that Western culture has declined since 1850, tell me in specific terms what we have lost.

Finally, my talk today has had a certain tone, but I did not start off down this road. Rather, I wanted to re-read Nietzsche and discover what he has to teach us about living with technology. I think the answer is not much. That is, nothing that couldn’t be placed on an inspirational poster in a dentist’s office. “If you’re going to use Twitter, use it like an Overman, not like a Last Man.” I think it’s telling, however, that one place we see a lot of discussion of Nietzsche is transhumanism. I think it’s telling because the technologies that would allow us to truly transcend our existence as homo sapiens are still quite a ways off. Recently, when Chinese scientists edited genes, it led to unplanned mutations and death. We have no idea how to move forward. What this means is that a lot of smart people are basically sitting around and thinking about Nietzsche and science fiction instead of focusing on the real suffering that is happening in our world right now. In other words, to this day, instead of drawing us closer to reality, Nietzsche’s spirit haunts us.

 

[i] Twilight of the Idols, “’Reason’ in Philosophy,” 1. All numbers given for Nietzsche’s texts are for section numbers, not page numbers.

[ii] Friedrich Nietzsche: A Philosophical Biography (Cambridge University Press, 2010).

[iii] Zarathustra, “Three Metamorphoses.”

[iv] No Place of Grace, 313.

[v] Ibid., 300.

[vi] (Section 2—see also his insistence that after Zarathustra’s “Yes-saying,” Beyond Good and Evil initiated a “No-saying, No-doing” streak, “the reevaluation of our values so far.”)

[vii] Stern, xii.

[viii] John Andrew Bernstein, “Nietzsche’s Moral Philosophy,” International Journal for Philosophy of Religion 29 (1) 1991: 55-6.

[ix] John Levi Martin, The Explanation of Social Action, 6.

[x] Twilight of the Idols, “Morality as Anti-Nature,” 2.

[xi] Twilight of the Idols, “Skirmishes of an Untimely Man,” 47.

[xii] My interpretation is also informed by Malcolm Bull’s Anti-Nietzsche, a provocative but highly problematic work. In that book, Bull criticizes how the poststructuralists and others have tried to make Nietzsche a friend to leftist and progressive causes since the 1960s. But, like Heidegger, Bull relies too heavily on Nietzsche’s notebooks that were later published as The Will to Power. I do think we should read and interpret the notebooks, but I’ve always questioned any interpretation of Nietzsche’s works that is primarily based on the unpublished material. For this reason, I am only going to focus on published works, especially Beyond Good and Evil, On the Genealogy of Morality, and Twilight of the Idols.

[xiii] The Seduction of Culture in German History, 4.

[xiv] The Gay Science, 354.

[xv] “The Problem of Socrates,” 5.

[xvi] Ibid., 9.

[xvii] Ibid., 2.

[xviii] Ibid., 4.

[xix] Ibid., 3.

Taylor's World Pt. 2: F. W. Taylor's Expanding Social Networks

This is a guest blog post written by Margaret "Amy" DiGerolamo. Today is the second and final day of Stevens Institute of Technology's Taylor's World conference on the life and legacy of the "Father of Scientific Management" Frederick Winslow Taylor. This year marks one hundred years since Taylor's death. Taylor's personal papers, furniture, and other objects have been in the Stevens archives for decades, and this conference is a fitting way to mark the potential of this scholarly resource.

This past summer, in anticipation of the conference, I began a multi-year research project along with two undergraduate students in the Stevens Scholars program, Margaret "Amy" DiGerolamo and Daniel Wojciehowski. Amy and Dan have entered about a thousand pieces of Taylor's correspondence into a database, recording multiple pieces of metadata for each letter (such as sender, recipient, the subject of the letter, addresses, and the industry under discussion). Once the database is complete, we will make it public as well as use it for research and educational purposes at Stevens.
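
To give a concrete sense of what one catalogued letter looks like, here is a minimal sketch of a possible record structure. The field names and example values are illustrative only (the sample entry is loosely modeled on a Taylor-to-Thompson letter listed in the bibliography below); they do not reproduce the actual schema of the Stevens database.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Letter:
    """One catalogued piece of Taylor's correspondence (illustrative fields only)."""
    sender: str
    recipient: str
    date: str                       # e.g., "1911-05-02"
    subject: str
    addresses: List[str] = field(default_factory=list)  # addresses mentioned in the letter
    industry: Optional[str] = None  # industry under discussion, if any

def letters_by_industry(letters: List[Letter], industry: str) -> List[Letter]:
    """Return all letters whose discussion touches the given industry."""
    return [letter for letter in letters if letter.industry == industry]

# A made-up sample record, for illustration:
catalogue = [
    Letter(
        sender="Frederick W. Taylor",
        recipient="Sanford E. Thompson",
        date="1911-05-02",
        subject="Constitution of the proposed management society",
        industry="management consulting",
    )
]
print(len(letters_by_industry(catalogue, "management consulting")))
```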

As part of their research, the students also completed a personal research project. The students were already doing a great deal of work, so I did not ask them to wade into the secondary literature; as a result, these studies could doubtless be better connected to existing scholarship. But what we see here are young, smart minds beginning to think through the social scientific study of technology and society.

Amy traced how the character of Taylor's social networks changed over time by examining the nature of the organizations that contacted him and/or invited him to give lectures. Unsurprisingly, she finds that his social world expanded a great deal from his beginnings in mechanical engineering. Fascinatingly, however, Amy describes how Taylor bridled when his colleagues sought to form a special organization dedicated to Scientific Management, an organization that eventually became the Taylor Society.

Frederick Winslow Taylor was an influential thinker of the early 20th century. While he frequently won others over, he also was very specific—to the point of stubbornness—about how others should follow his method and about what his ideas entailed. For example, Taylor started his notable work while he was an engineer at the Bethlehem Steel Company, and he was the co-creator of the Taylor-White Process, which produced a harder, more effective cutting tool. When the Committee on Science and the Arts of the Franklin Institute awarded Taylor the Elliott Cresson Medal, Taylor felt compelled to send them a letter because the write-up for the award insinuated that his discovery may have been more of an accident than a calculated experiment. The letter requested that they correct their wording of that sentence (Taylor 1902). Even though Taylor was difficult at times, this attitude eventually allowed him to spread and enforce his new principles of Scientific Management.

As Taylor’s career progressed from the invention of the Taylor-White Process, he began a revolution in the way that the industrial world worked. He explored how to increase productivity by using workers’ time more efficiently. Instead of the old process of one worker taking on production from raw materials to final product, he split up the work so that each worker would be in charge of one part of the process, and, in turn, the process would go much faster.

This was a giant leap from the way that factories had been run, and trying to prove that this was the right way was a large challenge for Taylor. Therefore, he was forced into a systematic process of disseminating these ideas. Part of this process included speeches about Scientific Management. Taylor would not speak on the terms of others. He would only agree to speak at “considerable length, because a short address leaves people antagonistic instead of friendly towards Scientific Management” (Taylor 1914). Taylor would not give talks unless allotted at least two hours.

Besides needing a large amount of time to speak, Taylor was also very strategic about the professional organizations in which he participated. Logically, Taylor started off in societies focused on mechanical engineering, and from there he extended his involvement into organizations that were further away from his original area of expertise. He moved from engineering societies alone to societies focused on a range of topics like education, philosophy, and history. After establishing his concept of Scientific Management, Taylor wanted to spread his ideas, and the best way to do that was to get involved in societies of varying focuses.

Taylor saw the potential to apply his focus on efficiency from Scientific Management to many areas. In the education field, he collected data from different colleges to analyze the efficiency of different physics classes (Taylor 1909). He was consulted on the best way to test cost versus effectiveness of classes. He took great interest in the US Navy Yards, which became significant as he moved into government work (Taylor 1909). The reach of Taylor’s ideas expanded to philosophy, history, and psychology. The American Philosophical Society asked Taylor to speak about Scientific Management and moving pictures (Keen 1913), and he was also invited to join the Historical Society of Pennsylvania as he was considered a “most prominent citizen” (Keen 1912). At one point the Society of Applied Psychology sent him a booklet entitled “Attainment of Mind Control” (The Applied Psychology Press 1914). Scientific Management was a concept that crossed over many different fields of interest because at its core it was just about efficiency and people.

Amy DiGerolamo created this table using the database of Taylor's correspondence. The dark colored organizations are close to Taylor's roots in mechanical engineering. The light ones are more distant, such as the Historical Society of Pennsylvania. What Amy finds is that Taylor resisted the society at the bottom, which he saw as a withdrawal into a private world of Scientific Managers, rather than influencing others.

            Taylor was also active in the broader field of engineering, especially through participation in the American Society of Mechanical Engineers. He believed that societies that were too specific were not as effective. For example, when he was approached by the American Society for Promoting Efficiency, he not only refused to be a part of the society, he did not want to be associated with the group at all (Taylor 1911).

His method was challenged again when his colleagues decided that Scientific Management should have a society of its own. Taylor believed that the best forum for the continued expansion of Scientific Management was the American Society of Mechanical Engineers. Other prominent figures in the field of Scientific Management disagreed. They believed that the American Society of Mechanical Engineers had “decided that the greater service would be rendered by emphasizing pure engineering, and consequently study and discussion of management found its opportunity restricted” (Brown 1925). This group of men included James M. Dodge, Frank B. Gilbreth, Robert T. Kent, Conrad Lauer, Carl G. Barth, Morris L. Cooke and H. K. Hathaway. They began meeting regularly as the Society to Promote the Science of Management.

Taylor, on the other hand, wanted to focus on the American Society of Mechanical Engineers. He believed that it would be more productive to convince this large group of individuals to follow the ways of Scientific Management than it would be to meet with people who were already advocates of the practice (Taylor 1910). Taylor wanted nothing to do with the Society to Promote the Science of Management in the beginning. He refused to look through the constitution that Sanford E. Thompson sent him (Taylor 1911). Taylor fought its formation and then refused to be associated with it, until it was pointed out that regardless of whether he joined, the fate of the society was connected to the fate of the concept of Scientific Management (Thompson 1911). Therefore, he eventually had limited involvement and accepted an honorary membership in the society (Taylor 1914).

Over the years, this story has been muddled. People often assume that Taylor was an advocate for this society, especially because it was renamed the Taylor Society after his death. Some historians misinterpret his reluctant surrender as support, but Taylor was clear that he was not at all supportive. He only joined to defend his legacy.

In the end, Taylor’s overall strategy was still extremely successful. Through his lengthy speeches and the broad dissemination of ideas through various societies, Taylor laid a foundation that made Scientific Management a key ideal in the industrial world. As he managed his relations with others and his position within larger social networks, he made himself the “Father of Scientific Management.”


 

 

Bibliography

Frederick W. Taylor to A. H. Blanchard, 13 November 1914, Box 1, Folder 5C (24), Frederick Winslow Taylor Collection, Stevens Institute Archives, Samuel C. Williams Library.

Frederick W. Taylor to H. F. J. Porter, 6 November 1911, Folder 5F (5), Frederick Winslow Taylor Collection, Stevens Institute Archives, Samuel C. Williams Library.

Frederick W. Taylor to Henry L. Gantt, 11 November 1910, Folder 6L (9), Frederick Winslow Taylor Collection, Stevens Institute Archives, Samuel C. Williams Library.

Frederick W. Taylor to Henry S. Pritchett, 24 March 1909, Box 1, Folder 5H (10), Frederick Winslow Taylor Collection, Stevens Institute Archives, Samuel C. Williams Library.

Frederick W. Taylor to Henry S. Pritchett, 19 April 1909, Box 1, Folder 5H (12), Frederick Winslow Taylor Collection, Stevens Institute Archives, Samuel C. Williams Library.

Frederick W. Taylor to Sanford E. Thompson, 2 May 1911, Folder 6L (7), Frederick Winslow Taylor Collection, Stevens Institute Archives, Samuel C. Williams Library.

Frederick W. Taylor to William H. Wahl, 26 August 1902, Box 1, Folder 5M (18), Frederick Winslow Taylor Collection, Stevens Institute Archives, Samuel C. Williams Library.

Gregory B. Keen to Frederick W. Taylor, 13 March 1912, Folder 5P (3), Frederick Winslow Taylor Collection, Stevens Institute Archives, Samuel C. Williams Library.

H. S. Person to Frederick W. Taylor, 28 October 1914, Folder 6L (25), Frederick Winslow Taylor Collection, Stevens Institute Archives, Samuel C. Williams Library.

Brown, Percy S. "The Work and Aims of the Taylor Society." Annals of the American Academy of Political and Social Science 119 (1925): 134-35. Accessed July 19, 2015. http://www.jstor.org.

Sanford E. Thompson to Frederick W. Taylor, 21 April 1911, Folder 6L (13), Frederick Winslow Taylor Collection, Stevens Institute Archives, Samuel C. Williams Library.

The Applied Psychology Press, 1914, Folder 6K (1), Frederick Winslow Taylor Collection, Stevens Institute Archives, Samuel C. Williams Library.

W. W. Keen to Frederick W. Taylor, 28 February 1913, Folder 5E (21), Frederick Winslow Taylor Collection, Stevens Institute Archives, Samuel C. Williams Library.

Taylor's World Pt. 1: Training in Frederick Winslow Taylor's Social Networks

This is a guest blog post written by Daniel Wojciehowski. Today is the first day of Stevens Institute of Technology's Taylor's World conference on the life and legacy of the "Father of Scientific Management" Frederick Winslow Taylor. This year marks one hundred years since Taylor's death. Taylor's personal papers, furniture, and other objects have been in the Stevens archives for decades, and this conference is a fitting way to mark the potential of this scholarly resource.

This past summer, in anticipation of the conference, I began a multi-year research project along with two undergraduate students in the Stevens Scholars program, Margaret "Amy" DiGerolamo and Daniel Wojciehowski. Amy and Dan have entered about a thousand pieces of Taylor's correspondence into a database, recording multiple pieces of metadata for each letter (such as sender, recipient, the subject of the letter, addresses, and the industry under discussion). Once the database is complete, we will make it public as well as use it for research and educational purposes at Stevens.

As part of their research, the students also completed a personal research project. The students were already doing a great deal of work, so I did not ask them to wade into the secondary literature; as a result, these studies could doubtless be better connected to existing scholarship. But what we see here are young, smart minds beginning to think through the social scientific study of technology and society.

Dan began using the database to do qualitative research on Taylor's social networks. (You can see a diagram that Dan made of Taylor's social network below.) In this post, Dan focuses on how Taylor used his personal connections to create training regimes around scientific management and, in turn, used training to increase his influence.

If you've ever had an idea for saving time and being more efficient with a repetitive task, then congratulations! You're following in the footsteps of Frederick Winslow Taylor, commonly known as the "Father of Scientific Management," a system based around efficiency in speed and motions. Taylor's ideas were simple: if a worker's routine could be made more efficient, then not only would he be more productive for the company, but he would also be less fatigued by unnecessary actions. In his own words, “The principal object of management should be to secure the maximum prosperity for the employer coupled with the maximum prosperity for each employee.” Taylor sought to implement his ideas into the workforce through a process he called "time studies." A time study essentially involved timing a worker's performance and thinking of ways to make him faster. Many times, this was done by cutting out unnecessary movements (for example, moving a box of nails closer to avoid reaching further); other times, by simply pressuring the worker into moving faster.

            Taylor believed that in an ideal world, workers would be trained in how to do their job well, and would be paid according to how much work they did. If one worker was able to complete ten products in an hour, and another was able to complete fifteen in an hour, then the second worker would be paid more, but the first worker would become motivated to try harder the next time, and would try to find ways to work faster.

However, being the practical engineer he was, Frederick Winslow Taylor knew he could not single-handedly perform every time study in every factory that wanted his system of scientific management. Luckily for him, there were other engineers who were also interested in improving the efficiency of labor. Taylor thus began his quest to spread the ideas of scientific management to people, rather than factories. Among his associates were other famed engineers of the early 20th century, including Morris Cooke, Carl Barth, Frank and Lillian Gilbreth, Henry Gantt, and Henri Le Chatelier. Taylor knew that if he could get more people to be taught the methods and details of scientific management, then not only would converting factories become easier, but scientific management could spread beyond engineering into other fields, such as medicine, the military, and even sports. His dream was for the whole world to operate as efficiently as possible, since he believed that under the Taylor system, the only way a worker would be unhappy in his job was if he did not want to be working at all.

A Diagram of Frederick Winslow Taylor's Social Network as Reflected in His Correspondence, by Daniel Wojciehowski.

Frederick Taylor was a very specific man, and it showed in his work. He was always interested in the exact amount of time it would take a man to do even the most mundane task, such as moving his hand across a table. It’s only natural, then, that this attention to detail extended to his training of other engineers in his methods of scientific management. He would only consider a man to be “trained” if he had worked directly under someone Taylor trusted, such as Cooke at the Plimpton Press, or H.K. Hathaway at the Tabor Mfg. Co. Engineers would often spend up to a year or even more working in factories which already employed scientific management, observing the effects of successful time studies, before Taylor would consider them ready. Sometimes, factories found this observation intrusive and disruptive to the workday, but Taylor would always sort it out with them. Optimally, Taylor would have liked to train everyone personally, but even a man as good with time as he was could not add more hours to the day.

One notable example is Hollis Godfrey, an engineer and teacher, who wished to apply scientific management to his own teaching methods. He wrote to Taylor in 1911, asking to speak with him about the ideas behind the system. Over several exchanges of letters, the two agreed that Godfrey would spend some time working under Cooke, before working with Hathaway and later James M. Dodge at the Link-Belt Engineering Co. Godfrey spent a year working at these three places, earning not only Taylor’s approval to work with scientific management, but also personal respect from Taylor and Cooke for his intelligence and creativity. Taylor would later recommend to others that they could hire Godfrey as a consultant on scientific management, and Godfrey testified before a House of Representatives Committee on scientific management.

Perhaps the most enthusiastic of Taylor’s followers were Frank Gilbreth and his wife, Lillian Gilbreth. The two, both engineers and parents to twelve children, idolized Taylor and his ideas. However, when first introduced, Frank Gilbreth had trouble understanding exactly what scientific management entailed. He was managing a bricklaying company in New York and Rhode Island, and initially contacted Taylor to see about getting his own company converted to scientific management. Due to his mistaken belief that the system was a simple adjustment, rather than the complete overhaul it actually was, Gilbreth was chastised by Taylor, Cooke, and Sanford Thompson, the owner of a cement company who worked closely with Taylor. Gilbreth recognized his mistakes, however, and sought to teach himself scientific management. He was also the man to suggest that Taylor rewrite his paper, “Principles of Scientific Management,” into the form of a textbook, so that it could be studied in schools. Gilbreth would later become one of the biggest proponents for the advancement of scientific management, and worked to promote it well after Taylor’s death with numerous societies, such as the Taylor Society, and the American Society of Mechanical Engineers. He would even write a book, “Primer of Scientific Management,” which was designed as an introduction to the basics of the system, and answered many of the questions Gilbreth found himself frequently being asked.

With Taylor’s system quickly growing more popular, it came as no surprise that other countries became interested as well. Charles de Freminville, a French railroad and automobile engineer, and Gaston de Coninck, a French shipbuilder, both came to America to train in scientific management with Taylor, before returning home, where, alongside Henri Le Chatelier, the famed chemist, they worked to promote the advancement of scientific management in France. Taylor also trained engineers from China, Germany, Finland, Denmark, and England, who all worked to promote the system in their own countries. Additionally, Taylor’s book, “Principles of Scientific Management,” was translated into many different languages, and when it was endorsed by engineers in other countries (such as Le Chatelier writing a foreword for the French edition), its credibility would be greatly boosted within that country.

Over time, the system grew with such rapidity that even industries outside of engineering wanted to make use of it, and there was no reason they couldn’t. Dr. Judson Daland, who treated Taylor’s wife when she was sick for several years, often spoke with Taylor about applying scientific management to hospitals, and the Gilbreths sometimes visited hospitals where scientific management was in demand. Walter C. Camp, a college football coach and army trainer, spoke with Taylor on occasion about applying scientific management to his training methods, and Taylor had several friends in the military and government who often came to him for advice, including Admiral Caspar Goodrich and General William Crozier. Taylor also became associated with numerous other societies interested in his work, including the American Academy of Arts and Sciences and the American Philosophical Society.

Frederick Winslow Taylor is often called a genius because he is the “father of scientific management,” but that’s only considering his technical genius, and not the genius in how he helped the system advance and grow. If Taylor had simply kept on converting factories by himself, the system would never have grown beyond those few factories. Instead, Taylor chose to spread the system to people, as they could then spread it beyond themselves. If Taylor hadn’t been an engineer, he would have had a fantastic career in marketing, since his system grew further than anyone ever expected it to, and it is now the basis for most, if not all, modern businesses.

Emission Controls, Defeat Devices, and Computerization: The Volkswagen Debacle in Historical Perspective

Volkswagen executives have admitted to the Environmental Protection Agency that the company used so-called “defeat devices” to beat automotive pollution control tests. VW’s stock dropped over 20%, and the crisis meeting its board is holding this Wednesday will likely be bloody. News reports suggest that the fiasco will cost the company at least $7.3 billion and may involve the recall of 11 million cars. The situation highlights changing aspects of the auto industry—especially the ever-increasing role of computerization in vehicles—and the need to reinvigorate American regulatory agencies, which have seen budget decreases since the federal sequester. But in misleading the EPA and the public, VW joins a long and proud tradition of companies that have tried to dodge federal standards.

Defeat devices are not new; neither is cheating emissions tests. In 1972, executives of Ford Motor Company came to the Environmental Protection Agency (EPA) to confess that one of the company’s divisions had been faking data for federal air pollution tests. The employees of the division had been conducting engine maintenance more often than was allowed by the federal test procedure. Frequently changing spark plugs, engine filters, and the like meant that the cars ran much cleaner than normal. So, it looked like the company’s vehicles were passing the tests with flying colors when, in reality, they were frequently flunking. 

That same year EPA staff members were running emissions tests on an American Motors Corporation (AMC) car. It failed. Everyone, including AMC technicians who were observing the tests, scratched their heads. “It must have been a bad sensor,” one of the AMC techs suggested. “What bad sensor?” asked an EPA staff member. That test initiated a series of events that led to EPA administrators coining the term “defeat device.”

It turned out that auto designers at AMC were using temperature sensors to turn off its vehicles’ emissions controls under certain conditions. Yet, to guarantee laboratory-like reproducibility between emissions tests, federal procedures required tests to be conducted with ambient air temperatures between 68 and 86 degrees Fahrenheit. EPA staff members realized that AMC had been using the sensors to control emissions only in that temperature range.
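
To make the mechanism concrete, here is a minimal sketch of the kind of conditional logic such a temperature-keyed system amounts to. It is purely illustrative: the function and variable names are hypothetical, and AMC's system was built from sensors and engine hardware, not software; only the 68-86 degree test window comes from the procedure described above.

```python
# Illustrative sketch of temperature-keyed defeat-device logic; not a
# reconstruction of AMC's actual hardware or of any real engine controller.

TEST_TEMP_RANGE_F = (68.0, 86.0)  # ambient temperature window required by the
                                  # federal emissions test procedure

def emissions_controls_active(ambient_temp_f: float) -> bool:
    """Run full emissions controls only when conditions look like the lab test."""
    low, high = TEST_TEMP_RANGE_F
    return low <= ambient_temp_f <= high

if __name__ == "__main__":
    for temp in (50.0, 75.0, 95.0):
        state = "on" if emissions_controls_active(temp) else "off"
        print(f"Ambient {temp} F -> emissions controls {state}")
```

The point of the sketch is how little it takes: a single conditional keyed to test conditions is enough to make a car look clean in the laboratory and run dirty on the road.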

While this discovery was problematic, much more worrying to regulators was the idea that other automakers could be doing the same thing. It was for this reason that administrators at the EPA created the phrase “defeat device,” issued a rule banning the use of such systems, and brought attention to the idea in the media. One auto executive later said he knew that the automakers had been beaten on the issue when he saw the words “defeat device” written in a newspaper. As a rhetorical weapon, the phrase was just too good.

Since the 1970s, the EPA has occasionally found automakers using systems that functioned as defeat devices. For instance, in 1995, the agency sued General Motors over an engine control system that turned off emissions controls outside of test-like conditions. Industry resistance to regulations remains a real problem and likely always will.

The fact that Ford, AMC, and GM tried to beat federal air pollution regulations, and that almost no one remembers it, undermines some of the arguments we see around the VW case. On Twitter and via other media, I’ve seen individuals say that VW’s reputation has been ruined forever. These people are seriously overestimating our collective memory.

One thing that makes the VW case different from the Ford and AMC ones is that it involves the use of computer software. As the AMC case shows us, however, defeat devices are not about computers but about auto engineers using technical systems to mislead regulatory authorities. They are dishonest technologies.

Still, the presence of computers raises important questions and issues. The simple reality is that the regulatory agencies focused on the automobile—both the Environmental Protection Agency and the National Highway Traffic Safety Administration (NHTSA)—have not kept pace with the changing role that computers play in automotive systems. During the Toyota recalls of 2009-2011, some individuals hypothesized that the computers in the company’s vehicles were causing “sudden unintended acceleration.” When NHTSA administrators testified on the matter in front of Congress, they were forced to admit both that they themselves had little understanding of computer systems and that they did not have any electrical or software engineers on staff. The agency turned technical analysis of the Toyota cars over to NASA (which did not find anything wrong with the computer systems).

This knowledge gap is part of a more general problem around government and computing. A colleague of mine, Arjan Widlak, runs the Kafka Brigade, a consulting group that helps organizations overcome the problems of red tape. Widlak has consulted for governments in many parts of the world, and he has said that he consistently finds blind spots around computing in such organizations. Part of the issue seems to be one of responsibility. Computing has entered nearly every part of society, and yet an individual who already has responsibilities within a bureaucratic organization may not necessarily see computerization as part of his or her job. It's not that things should work this way; they just do. The result is that in many areas, including auto regulation, computerization has blindsided organizations.

Well, what to do? 

Wired has one answer. In a rather unfortunate piece of writing, titled “The EPA Opposed Rules That Could’ve Exposed VW’s Cheating,” Alex Davies calls attention to the agency’s opposition to changes in intellectual property law. The software in automobiles is covered by the 1998 Digital Millennium Copyright Act (DMCA), which makes it illegal to hack through “technological protection measures” or reverse engineer software. The Library of Congress issues exemptions to the law, and a “group of proponents” lobbied the Library to make an exemption for automotive software. The EPA opposed these efforts because its staff members worried that hobbyists hacking the software in their cars would change the engine performance parameters in such ways that the cars would violate federal air pollution laws. 

(By the way, people are already hacking their cars' engines, and when they do so to increase performance, their cars DO violate federal emissions standards. I brought the moral problems with this up with a guy who digitally tinkers with his engine and who puts it back to factory settings before taking the car in for his annual emissions inspection, and he just shrugged.)

It’s here that Davies’ article enters the realm of speculation. There are a lot of words and phrases like “could” and “it’s possible” and “good chance” in the piece. It's a real stretch. He writes, “In opposing the exemption for individual car owners to examine the software, the EPA would close an important avenue for uncovering security and safety issues in vehicle software, because often these kinds of issues are uncovered by individual research while simply examining their own product or vehicle for fun or curiosity, not formal research.” In other words, Davies uses the Volkswagen debacle to push the classic cyberlibertarian ideology, wherein intellectual property is evil (“Information wants to be free, man”) and hackers and makers will save the world.

There is something slightly off-putting, even gross, about using this tragedy to promote such an agenda, but that isn’t even the point. I am fine with the federal government making automotive software exempt from the DMCA, an extremely flawed law to be sure, but the idea that the hacker, or reverse engineering, equivalent of citizen science is even a partial answer to corporate malfeasance around regulated technologies seems slightly crazed. The suggestion certainly lacks any sense of proportionality. The US federal government did not drop some regulated emissions in automobiles by 99 percent or drastically reduce the number of people killed and injured per mile driven by relying on tinkerers. Such change requires organizational capacity, real resources, and expertise.

Which brings us to the true solution to our current plight around automobiles, computers, and regulation. Watchdogs have criticized federal auto safety and air pollution enforcement for years, and it has been nice to see these agencies recently step up enforcement efforts. But the reality is that these organizations face a shortfall of resources, especially since the federal sequester. Regulatory aggression does not amount to much without resources. What we are witnessing is a virtual deregulation via defunding. There are other historical examples of this. For instance, the number of automobiles recalled for being unsafe dropped drastically when Ronald Reagan became president and cut federal budgets.

The only way to solve our problem is to build organizational capacity in these regulatory agencies around computers. Doing so will require real resources. Computer scientists and other technical experts don’t come cheap. But truly this is our only option. We need offices of dedicated experts going over and testing automotive computer code for the sake of safety and clean air. 

Put simply, if we value our lives and our health, we should value the organizations that we built to protect them.

Why No One at General Motors is Going to Jail, Why Maybe Someone Should

Last week brought news of the settlement between the US federal government and General Motors over faulty ignition switches the company installed in millions of vehicles, which have been potentially linked to over 150 deaths. In the settlement, GM agreed to pay $900 million in fines—this is in addition to over $625 million the company expects to pay to victims and the billions it has spent on recalls—while federal prosecutors have signed off on a "deferred prosecution agreement" and will not seek to convict any GM employees.

Already in July, the chances of the federal government seeking criminal convictions against individuals were dwindling, despite the fact that the firm's employees (apparently) actively covered up the defect for years. As one news outlet put it, the problem was "legal loopholes" in American auto safety regulation. Put simply, the National Traffic and Motor Vehicle Safety Act of 1966—the law that enabled national auto safety standards in the USA—did not include criminal sanctions. Why? Answering that question requires returning to the law's history.

Already in the 1950s, forward-thinking lawmakers, like Rep. Kenneth Roberts, and safety advocates, like the rocket-sled-riding Col. John Stapp, were pushing for automakers to adopt safety standards that made cars safer during crashes. Most earlier efforts had focused either on making drivers better (for instance, by mandating driver's education) or on promulgating standards that increased drivers' control over the car, including the kinds of headlight and brake tests that became a part of annual auto inspections. In opposition to these earlier approaches, Roberts, Stapp, and other safety thinkers of the 1950s joined hands around a theory known as the "second collision." The core idea of the theory was that drivers and passengers were not injured during the "first collision" between their car and another object, like a tree, but in the "second collision" between their bodies and the car's interior. The safety standards that these thinkers pushed, sometimes known as crashworthiness standards, were meant to reduce injuries by softening the car's interior and removing knobs, handles, and other objects that might gouge or beat the body during crashes. But auto safety advocacy in the 1950s did not get far. The automakers resisted it vehemently at almost every turn.

In the early-to-mid 1960s, however, political pressure mounted to change auto design. Partly this was a result of auto safety advocates gaining new positions of power in the federal government. Daniel Patrick Moynihan, who had worked on auto safety for the State of New York, became Assistant Secretary of Labor and promptly set up a commission on the topic. Abraham Ribicoff, who had initiated auto safety initiatives as the Governor of Connecticut, was elected Senator. Eventually, even President Lyndon Baines Johnson, in full Great Society swing, joined the call for safer cars.

In their new roles, both Moynihan and Ribicoff worked with a young lawyer named Ralph Nader, who was one of the foremost experts in the nation on the political and legal dimensions of auto safety. Nader published his book, Unsafe at Any Speed, in 1965, and he helped Ribicoff's staff members write the bill that would become the National Traffic and Motor Vehicle Safety Act.

A Young Ralph Nader

The passage of that law was far from certain, however, until GM President James Roche was forced to admit before the Senate that the company had hired private investigators to peer into Nader's private life. The investigators asked Nader's acquaintances about his sexuality, mental health, and political associations and may have hired an attractive woman to make a pass at him.  Roche's confession made Unsafe at Any Speed a best-seller, turned Nader into an icon, and guaranteed the creation of auto safety standards that cover every vehicle sold in the United States. The auto safety act passed both houses of Congress unanimously.

Yet, as is so often the case, the National Traffic and Motor Vehicle Safety Act was a compromise between several bills. For instance, the first bill that the Johnson White House sent would have enabled the Department of Commerce—an agency that many saw as industry-controlled—to set safety standards if the auto industry failed to create its own. All of the bills from the House and Senate were stronger than that, but they were weaker than Nader and other safety advocates hoped. Most important, the final compromise bill dropped provisions that Nader and Ribicoff had included, among them criminal penalties for corporate executives and others who produced and sold unsafe cars. Congressional political processes weakened the resulting law.

In this way, the National Traffic and Motor Vehicle Safety Act lacked teeth: it enabled the federal government to set safety standards that all automakers had to follow, and it gave federal administrators the power to force recalls and set civil penalties for offending corporations, but it contained no criminal provisions to put automaker executives and employees behind bars.

In the wake of this new General Motors settlement, there have been calls to revisit the National Traffic and Motor Vehicle Safety Act and amend it with criminal provisions. It's a discussion worth having.

I do not believe that it's a foregone conclusion that such criminal provisions would be a good thing. Certainly, it would be emotionally, maybe even morally, fulfilling to punish individuals who knowingly make unsafe vehicles and cover up such knowledge. But in the end, the law and public administration around matters of safety should aim pragmatically to save us from death and injury. We should think carefully about what kinds of incentives criminal penalties would create for individuals within auto companies. Would criminal provisions make individuals more likely to report problems and generate recalls? Or would they have exactly the opposite effect, leading to even more conspiracies of silence?

Federal agencies have a tradition of trying to create incentives that encourage industry self-policing and self-reporting. For example, in their first years, both the National Highway Traffic Safety Administration and the Environmental Protection Agency decided that they would not bring the full weight of the law to bear against companies that self-reported problems. Harshly punishing self-reporting companies would create an environment where no one in their right mind would self-report.

It seems that adding criminal provisions would also change incentives around auto safety standards: such provisions may lead to more reporting and whistle-blowing, or they may foster more cover-ups. They should be approached with great care, but if there were ever a time to consider them, it is now.

Snake Oil for the Innovation Age: Christensen, Forbes, and the Problem with Disruption

A new study by Andrew A. King and Baljir Baatartogtokh in the MIT Sloan Management Review has marshaled the best evidence yet to show that Clayton Christensen’s theory of disruptive innovation is deeply flawed. The study—which is put in its larger context in this insightful article by Evan Goldstein in the Chronicle of Higher Education—found that only 9 of 77 cases that Christensen used as examples of disruptive innovation actually fit the criteria of his own theory. Economists and business school thinkers have known that disruptive innovation was problematic for quite a while. For instance, the economist Kenneth Simon wrote a paper on the topic as early as 2009. But the dissent of these thinkers rarely entered public view, perhaps because speaking up would have gone against the academic code that forbids publicly attacking one’s own.

As Clayton Christensen Falls, So Fall His Popularizers

Indeed, the current assault on Christensen’s thought was led not by an economist or business school-type but by Jill Lepore, a professor of history at Harvard University. Lepore’s 2014 New Yorker essay, “The Disruption Machine: What the Gospel of Innovation Gets Wrong,” set the business world on fire. Some were glad that someone had finally spoken up. Others thought that Lepore was basically right but that she could have done a better job with some of her points, should have talked to more experts in the field, and could have toned down her fiery rhetoric. Christensen and his defenders, to put it mildly, went on the attack. Christensen asserted that Lepore’s article was “a criminal act of dishonesty.” But the scales of truth aren’t tipping in Christensen’s direction.

What does the new study by King and Baatartogtokh mean? King has written another article with Brent Goldfarb showing that as many as one-third of studies published in management journals may be wrong or make inflated claims. (The King-Goldfarb article fits a recent genre of studies that have found similar problems in the psychology and biomedical fields). But King and Goldfarb take a charitable interpretation of how people end up getting things so wrong. In an academic world that is marked by constant pressure and that puts a high value on novelty, scholars work and re-work their data until they find something that looks interesting but that good analysis would show to be chance. “These are not cases in which you are trying to put something over on someone. You’re fooling yourself,” Goldfarb has said.

Perhaps that’s right, but I think we also need to consider cases where individuals’ professional and financial interests conflict with good (social) science and where these interests lead individuals to continue pushing an idea even when it has been thrown in doubt. After all, Christensen himself has co-founded a consultancy, Innosight, and an investment company, the Disruptive Growth Fund, based on the notion of disruptive innovation. He has made millions on this concept.

We find even bigger problems when we look at the world of popular business publishing, however. Pop business books and magazines are a form of self-help. They purport to base their suggestions on the best available “facts,” but often the analysis undergirding these facts is not terrific. Few ideas have gripped the pop business world as tightly as disruptive innovation, perhaps, as some economists have suggested, because business leaders fear nothing more than being “disrupted.” If Christensen is threatened, so too are the popularizers of his ideas.

It is not surprising then that we see the business press lashing out at Christensen’s critics. In his Chronicle piece, Goldstein notes that a Forbes magazine writer wondered whether Lepore’s critique arose from personal doubts: “Was Lepore unconsciously projecting onto Christensen and his theory her own well-justified anxiety, panic, and fear about the ‘disruption’ of the humanities itself at Harvard?”

But now it seems like the tables are turned: the Forbes magazine writer was projecting onto Lepore his own well-justified anxiety, panic, and fear about the disruption of disruption because the disruption of disruption means that Forbes peddles snake oil.

A search for “disruption” on the Forbes website gets 6,039 hits with titles like “Tesla’s High End Disruption Gamble” and “The Physics of Disruption.” A search for “disruptive” garners another 4,704 hits with “A New Met Exhibit Shows Why the Saxophone Was One of the Most Disruptive Innovations in Music” and “Is Donald Trump a Disruptive Political Innovation?” Chunka Mui, a Forbes.com writer (with whom I’ve tussled before), has predicted that autonomous vehicles will unleash a market worth trillions of dollars and “create enormous disruption to current automakers business models.”  But as Goldstein and others have shown, Mui’s own master, Christensen, has made many terrible predictions when it comes to disruption. True disruption is rare, and prophesying it is a fool’s errand, especially when its underlying social science is so faulty.

There is no shortage of publications that promote bad science, whether that comes in the form of climate denialism, faulty studies of tobacco or sugar consumption, or fear-mongering over GMOs, cell phone signals, or fluoridation. Writers such as George Johnson, John Horgan, and Naomi Oreskes and Erik Conway ceaselessly point out such problems. What we have not taken note of as often is how bad social science is pushed by consultants, like Clayton Christensen, and business publications, like Forbes. Yet, their snake oil too has done real harm.

An Open Letter to George Johnson

Dear George,

I read your interesting New York Times column, “The Gradual Extinction of Accepted Truths” (online title “The Widening World of Hand-Picked Truths”). In that piece, you worry that the social world has divided into camps or tribes, each its own “self-reinforcing bubble of thought.” You write, “Viewed from afar, the world seems almost on the brink of conceding that there are no truths, only competing ideologies—narratives fighting narratives.” The end result, you argue, is this: “Presenting people with the best available science doesn’t seem to change minds.”

When I read your words, I hear despair. Despair over the state of the world, despair over the place of science in society, despair over where we are headed. I share your despair, especially when it comes to our inability to take climate change science seriously and enact meaningful policies around it, but I do not share your reasons for despair. It is these differences of reason that I wish to discuss.

I saw yesterday that you have another critic, Alex Tsakiris, who attacks you harshly for putting faith in “status quo science” when it has proven again and again—and often in the cases you cite, like vaccines, fluoridation, and climate change—that it is not trustworthy. I am not a “skeptic” of Tsakiris’s type. I mostly buy into the causes you defend (though I think you neglect the historical circumstances for why people have come to distrust science since the 1960s).

I also am not a “postmodernist” in the sense that word seems to hold for you, that of a thoroughgoing relativism. I believe that there are better and worse ways of coming to know the world and that there is typically some best-available-knowledge, though such knowledge is always open to revision. “Lurking out there is some kind of real world,” you write. For sure, but it frequently eludes us. I guess in this schema, I am a plain-old modernist then, as you are, but it is our modernisms that are in disagreement. Put simply, I believe that you do not take the best available social science into account and that your failure to do so is pungently ironic.

Your column begins, in 1966, with the religion editor of Time magazine asking, “Is God Dead?” You write that “it was natural” for the editor “to assume that people would increasingly stop believing things just because they had always believed them.” But now “almost 50 years later that dream seems to be coming apart.” And near the end of the piece, you mention a “widening gyre of beliefs.” These statements, as well as the print and online titles of your piece (which I realize may have been chosen by editors), make it sound as if you are making a historical argument: in the mid-20th century, we were headed towards wide-scale acceptance of science, but it has gone off the rails. But do you have any historical evidence to support these claims? Put another way, who in the mid-20th century held this dream? Was it widely held? Or was it the domain of relatively well-educated elites, like this editor at Time? What evidence do you have of expansion or contraction?

You and I are in perfect agreement that the Internet has increased the formation of belief subcultures, not only when it comes to groups like anti-vaccers but also, like, “furries”—people who enjoy dressing up as and pretending to be animals. Yet, since at least the 18th century, individuals have been able to choose media sources, whether newspapers or magazines or cable television, that fit their prejudices, and society didn’t need the Internet to create many subcultures with bizarre beliefs.

These historical questions bring us to the more fundamental issue in your argument: it seems to suggest that we should simply accept the findings of science (because it is rational to do so), but the best available studies of how humans react to and take up information suggest that they have NEVER acted that way.

You might be referring to these studies when you write that, “in a kind of psychological immune response,” people “reject ideas they consider harmful.” But it is ambiguous in your article whether you believe this response is simply a moral failing or whether you think it is part and parcel of being human. Social science research increasingly finds it to be the latter. What you call an “immune response,” social scientists call the “backfire” or “boomerang” effect. In a 2010 study, Brendan Nyhan and Jason Reifler presented hundreds of participants with information about tax policy, stem cell research funding, and the presence of weapons of mass destruction in Iraq. They found that participants who held false beliefs about these things clung to their misperceptions even more strongly after being exposed to corrective facts.

P. Sol Hart and Erik C. Nisbet conducted a similar study on beliefs about climate change. As they note, many scientists and journalists adhere to “the deficit model, which assumes that increased communication . . . about scientific issues will move public opinion toward the scientific consensus.” But they find that the exact opposite occurs when conservatives are confronted with science that conflicts with their preexisting worldviews.

These studies fit within a larger literature on “motivated reasoning,” the idea that preconceptions strongly influence how later information is viewed and interpreted. Charles S. Taber and Milton Lodge, two of the leading scholars in this literature, have conducted studies demonstrating that individuals seek out information that confirms beliefs that they already hold and that they put more cognitive resources into denigrating and taking apart arguments that don’t fit their worldview. Moreover, they found that people who are better informed and more sophisticated actually have stronger biases, not weaker ones. As Taber and Lodge write, “Far from the rational calculator portrayed in enlightenment prose . . . homo politicus would seem to be a creature of simple likes and prejudices that are quite resistant to change.”

In a related and well-reasoned essay, the sociologist John Levi Martin argues that individuals’ beliefs are largely a product of where they fit within social networks, that “politics involves the establishment of webs of alliance and opposition, and this in turn is used by political actors to generate opinions.” Furthermore, “the ‘knowledge’ that ideology gives us is that which would justify our side and strip our enemies of their justification.” That is, if the other guys think it, it must be hogwash.

None of these studies are “postmodernist.” They are based on the belief that we should study social reality empirically with the best available methods and ideas at hand. Jacques Derrida would not touch them with a twelve-foot pole.

Furthermore, findings like these aren’t even new. Indeed, they precede the Time essay that you take to be so meaningful. In the 1940s, when the sociologist Paul Lazarsfeld and his colleagues studied the influence of mass media on political elections, they developed a model called the “two-step flow of communication.” They argued that most people did not get news directly from mass media but rather through influential people in their lives, whom Lazarsfeld et al. called “opinion leaders.” While subsequent studies have questioned parts of Lazarsfeld’s model, they typically uphold the idea that human beings do not learn about or interpret information on their own but rather as part of a social group, and that influential figures play an important role in whether new information is seen as relevant, how it is understood, and what it is taken to mean for subsequent decisions.

All of these social scientific studies imply real consequences for how individuals encounter new information, including scientific findings. To paraphrase something John Levi Martin once said to me, “If we get exposed to information that cuts against our opinion, we are less likely to understand it. If we understand it, despite this, we’re less likely to believe it. If we believe it, despite this, we’re less likely to remember it. If we remember it, despite this, we’re less likely to think it has strong implications for anything in particular.”

Obviously, people change their minds, and none of the studies above suggest otherwise. What they do suggest, however, is that changing our minds often goes hand-in-hand with changing who we choose to affiliate with. I know this from personal experience. I was raised in a conservative household and extended social network, which taught me that humans once lived with dinosaurs and that homosexuals are sinners. I no longer believe either of these things (neither do my parents, by the way), but I was also determined to leave that social network behind. My close-knit group of friends made the same decision. When a high school teacher asked one of my best friends to write a personal essay about his goals, he wrote, “My goal is to get the fuck out of Joliet, Illinois.” I exited that town and joined academia, which is full of atheists, humanists, and people just like you. Put another way, just as recovering alcoholics avoid hanging out with old drinking buddies, the best way to buy into the Big Bang is to exit Young Earth Creationist groups.

These findings have many fascinating implications for science communication. For instance, they suggest that (sadly perhaps) who is speaking is often more important than what he or she is saying. (Surely this idea unsettles proponents of scientific reason.) Let me give an example: An Inconvenient Truth helped win Al Gore a share of the Nobel Peace Prize, but when it comes to having a spokesperson for global climate change, it is hard to imagine someone worse. Indeed, An Inconvenient Truth likely damaged the chances for meaningful climate policy in the United States. Why? By the time An Inconvenient Truth was released, conservatives had loathed Gore FOR YEARS. In his 1993 book, See, I Told You So, Rush Limbaugh included a chapter titled “Algore: Technology Czar,” in which he lambasted Gore for being a lackey and a tree-hugger who foisted unfounded “scientific” beliefs on the American public.

I am not a “ditto head,” and I do not agree with Limbaugh about Gore. I believe that the world is a better place because of Gore’s leadership, because of his environmentalism and, yes, because of the policies he shaped around the Internet. The point, however, is that if the goal was to change the minds of non-believers in climate change—who are mostly conservative—An Inconvenient Truth was an abject failure, and yet it is a milestone of clear scientific communication. It’s just that clarity isn’t the point. When conservatives see Al Gore talking about climate change, they see the spotted owl, or they laugh to themselves, “Al Gore invented the Internet,” or they see a white stain on Monica Lewinsky’s blue dress. They do not hear what Gore says, or if they hear, they do not engage it.

I have brought these ideas up to our mutual acquaintance, the science writer John Horgan, my buddy and colleague at the Stevens Institute of Technology. John deals with these social scientific findings in this way: first, he acts confused, as if he does not understand them. Second, he discounts them. (“That sounds too postmodernist to me,” even though the theory and methods undergirding these findings are quite far from academic postmodernism.) Third, he fails to do his own research in these matters, dig deeper into the available studies, or refute them. Fourth, he does not take the findings to mean anything in particular for his own life. In other words, Horgan acts exactly how John Levi Martin suggests that, say, conservative Evangelical Christians act when confronted with evolutionary science. It appears that Horgan is human.

There are good reasons for Horgan to play the ostrich and stick his head in the sand when it comes to this vein of social science research. If he took it seriously, he would have to change his whole modus operandi, just as you would have to change yours. And Horgan has spent thirty years carefully crafting his identity as a curmudgeonly herald for good science! After all, his Scientific American blog is titled “Cross-Check,” a reference to a violent hockey move, and it promises to take a “puckish, provocative look at breaking science.” If Horgan took the findings of these studies on board, he would have to change the tone of his communication and indeed spend more time traveling the country talking to Evangelical ministers and other influential figures with whom he shares very little but who, if he were to win them over, would do a great deal for his various causes. But do we really want Horgan to sacrifice this puckish image that we all love and admire, even if maintaining this mode of communication means that he risks preaching to the choir for the remainder of his days?

There are structural and cultural reasons why scientists and journalists avoid thinking through the kinds of social scientific findings discussed above: both scientists and journalists are—in their idealized image—dedicated to the dogged pursuit of truth and to the idea that presenting objective “facts” to the public will improve the world. They do not want to face up to the reality that the second part of this belief system—that humans interact with facts in a rational and unmediated fashion—is based on lousy anthropology and cruddy psychology. And yet I believe that there is no greater need in science communication than for scientists, journalists, and others to deal with exactly this problem. Otherwise, we are lost.

In the end, we are left with a darkly funny tragicomedy, perhaps written by Samuel Beckett’s ghost: in one room, we have a public meeting of neo-hippies and homeopathic medicine types who cry out that cell phone signals are causing cancer. They reject the best available science on this topic. Next door, we have a room full of irascible and curmudgeonly science journalists waving their fists in the air, lamenting the fact that so many people in our society do not take science seriously. Yet these journalists reject the best available (social) science about how human beings actually behave.

When the curtain falls, there is no light.

Sincerely,

Lee Vinsel

 

PS: I would like to thank my Stevens colleagues Lindsey Cormack and Kristyn Karl for more deeply educating me about motivated reasoning. I'm so happy to have you two aboard our little ship.

 

The Maintainers: A Call for Proposals

Last year, when Walter Isaacson published his book, The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution, historians met it with a good deal of skepticism. For instance, members of the computer history organization, SIGCIS, questioned Isaacson's mastery of the basic facts, pointing to problematic statements the author made both in the book itself and in promotional interviews.

Other criticisms challenged Isaacson's interpretation in a more fundamental way. For example, in the book's introduction, Isaacson espouses twin ambitions: first, he wanted to draw attention to the central role that groups, teamwork, and "collaborative creativity" played in the invention of digital technologies. (He regurgitates long-debunked myths that earlier technologies depended on solitary geniuses--Edison, Bell, Morse--and implies that collaboration is something particularly true of the digital.) Second, he wished to spell out "the social and cultural forces that provide the atmosphere for innovation," including the "research ecosystem." In other words, he aspired to write history. Yet The Innovators is fundamentally a series of mini-biographies. Isaacson misses opportunities to write about groups and their dynamics in any deep way. And throughout the book, he highlights the "ecosystem" of invention only selectively, missing the crucial lesson that all technologies develop in specific historical contexts.

It gets worse. The Innovators' biggest problem is that it's called The Innovators and is written in the dialect of innovation-speak, perhaps the dominant ideology of the day, beloved of Silicon Valley-headed libertarians, TED Talkers, Wall Street business hustlers, and Republican and Democratic presidents alike. While Isaacson admits that innovation has become a buzzword, he repeats the ideology uncritically. He promises "to report on how innovation actually happens in the real world," but his framing almost totally neglects the real world as it actually exists.

Isaacson's neglect is problematic in many ways, and like much writing on innovation, it gives us a narrow and skewed picture of life with technology. For this reason, the historian of technology, Andrew L. Russell, author of Open Standards and the Digital Age, has proposed that scholars produce a volume that responds to Isaacson's book, with the following title:

The Maintainers: How a Group of Bureaucrats, Standards Engineers, and Introverts Made Digital Infrastructures That Kind of Work Most of the Time

Since Russell produced this parody, a number of scholars in science and technology studies have discussed developing its core insight. The discussion has moved well beyond the realm of digital technologies to include all forms of infrastructure, the mundane labor that goes into sustaining everyday life, and the people who are left out or fare poorly in our current technological arrangements, both within rich, industrial nations and outside of them.

Our ultimate dream is to hold a conference or workshop or even a series of gatherings on this theme and perhaps to produce an edited volume on it. We would like to begin the discussion at the annual meeting of the Society for the History of Technology (SHOT), which will be held in Albuquerque, New Mexico, October 8-11, 2015. The hope would be to propose a few interconnected panels for that conference. Proposals are due on March 31st.

I am writing this blog post to find like-minded individuals who are interested in exploring the history of maintenance, infrastructure, and mundane labor, broadly construed. We believe that such investigations could have practical upshots, and we are especially keen to involve practitioners, including standards engineers, forensic engineers and architects, managers in charge of safety and maintenance, policymakers who focus on upkeep and infrastructure health, and others involved in such pursuits. Furthermore, this effort must have an international and transnational dimension, including work on "developing nations." (Some of us, for example, are interested in the development economist Albert O. Hirschman's insistence that, to survive, societies must develop a "maintenance habit.")

We are not pretending that we are breaking fundamentally new ground here. This push builds on the back of Ruth Schwartz Cowan's and David Edgerton's calls to focus more on mundane or everyday technologies, for example, as well as on the large body of work on the history of infrastructure written by scholars including Joel Tarr, Mark Rose, Bruce Seely, Amy Slaton, Steven Usselman, Susan Leigh Star, Geoffrey Bowker, Paul Edwards, Scott Knowles, and Chris Jones, to name a few. In other words, our debt is immense.

Still, we believe that the theme is worth pursuing and that, in pursuing it, we might push back, however slightly, on our society's too shallow, too easy, and too sleazy obsession with "innovation."

If you are interested in what I have written above, including proposing a paper or panel for SHOT, please drop me a line at leevinsel@gmail.com.

Hitting Dogs with Hammers: Animals, Auto Safety, and the Angel of History

In the early 1940s, E. S. Gurdjian and Herbert Lissner, two researchers at Wayne State University, conducted the following experiment:

“First, they took a number of ‘mongrel dogs weighing 5 to 15 kilo,’ and with the dogs ‘under intravenous nembutal anesthesia,’ they cut off the dogs’ scalps and carved away the dogs’ masseter muscles.[1] The researchers then stopped all bleeding, before they 'carefully dried and polished' the skull. Next, they affixed a Baldwin Southwark SR-4 strain gage to it. Then, Gurdjian, Lissner, and their associates would prepare the 'pressure plugs,' devices that measured the internal pressure of a liquid-filled space. They would use a 'number 25 drill' to make two holes in the dog's head. They filled these holes with the pressure plugs, ensuring that the wires made 'contact with cerebrospinal fluid and the brain.' Then the Wayne State researchers hit the dogs on the head with radial hammers, often repeatedly, sometimes—with breaks between strikes—for hours at a time. Using the strain gage and the 'pressure plugs,' the researchers could study both the structural deformation of the skull and changes in its internal pressure as the hammer struck the dog's head, thereby learning about the functions of concussion in real time.”

A (living) dog's skull prepared with strain gages and "pressure plugs" for a study of concussion at Wayne State University. Once prepared, the dog would be hit over the head with a hammer, often repeatedly.

This winter break, most of my work energy will go into writing the third chapter of my book, Taming the American Idol, from which I’ve drawn the quotation above. The chapter examines the rise of two new sciences of automotive risk: the science of photochemical smog and other forms of automotive air pollution, and impact biomechanics, the study of how physical forces affect biological entities. These two bodies of scientific knowledge emerged between about 1940 and 1960, the period of time that my book’s third chapter explores. These sciences laid the foundation for the federal automotive safety and pollution control regulations that became law in the 1960s and 1970s. Moreover, since fuel economy standards in the United States are based on information from auto air pollution tests, you could say that these sciences undergird basically all of the major US auto regulations.

Studying impact biomechanics brings me to a dark place, however. So much of the field’s work was based on experiments on living non-human animals. Cadavers of both the human and non-human variety also played a central part—perhaps the central part—but frankly experiments on the dead do not bother me (that is, as long as the deaths were ethical, which in the case of non-human animals, they often weren't). I first wrote about the role of animal experimentation in impact biomechanics in a paper that I delivered at the 2012 meeting of the Society for the History of Technology in Copenhagen. In that paper, I concluded with a section on ethics, animal rights, and history. I was deeply moved when I wrote the paper—honestly, to the point of moral disgust—and I couldn't end the paper without explicitly addressing the politics and morality involved in these experiments.

The Wayne State researchers applied strain gages and other devices not just to skulls but to many other parts of living animals as well. This image is from a textbook they wrote on the use of strain gages.

I agree with Martha Nussbaum that, contrary to the claims of some conservative philosophers, disgust is a destructive emotion, which doesn’t help but hinders moral thinking. I also believe that disgust shuts down our capacities to be good historians and social scientists. So, when I say that my research into impact biomechanics moved me to disgust, I mean that I wigged out. I had to do some self-work to un-wig myself and get back to work. Writing the concluding section of my paper was an initial step in that direction. That ethical section, however, will not fit in Taming the American Idol, which is a narrative history of auto regulation in the United States. So, I am putting the section here. First, though, I will give a sketch of the rise of animal experimentation in impact biomechanics. For a deeper and more nuanced account of this history, you’ll have to wait for my book.

In the United States, the field of impact biomechanics is really best thought of as an instance of multiple independent invention. Several groups across the country began doing work that would later be called impact biomechanics in the late 1930s and early 1940s, but they initiated that research for their own idiosyncratic, local reasons, not because they were part of some larger discussion. Only later did these researchers coalesce into a community of inquiry. From the beginning, impact biomechanics was a hybrid field that brought together engineering disciplines with biological ones, including medicine. That hybridity can be easily seen at Wayne State, the only case I’ll address in this post.

In 1939, the surgeon and Wayne State professor E. S. Gurdjian became interested in the biological mechanisms of concussion. He had grown weary of watching people die from head injuries in the emergency room—especially those caused by auto accidents. He realized that to study such injuries you needed to be able to measure and think about forces, something that lay outside his expertise. So, he reached out to engineers at Wayne State. Soon, he had connected with the young mechanical engineer Herbert Lissner, an expert in materials testing. Gurdjian’s and Lissner’s initial research involved studying skull fracture. They painted industrial lacquer on dried skulls and dropped the skulls on steel plates. The lacquer enabled Gurdjian and Lissner to observe which parts of the skulls were most prone to fracture.

These skull studies had limits, however. Soon the Wayne State researchers had moved on to cadaver research. But more important, Gurdjian and Lissner realized that if they were going to understand the biological responses inherent in processes like concussion, they would need living subjects. Since inducing concussions in humans was ethically problematic even in the 1940s, the researchers turned to living animals—in the case of Wayne State, mostly dogs. In the beginning, the Wayne State researchers lacked their own research facilities and equipment, but they were able to find available resources at one of the automakers' facilities. One impact biomechanics researcher later joked, "Taking anesthetized dogs into the auto plant at 10 P.M., past the plant protection, makes quite a story."

Wayne State researchers wondered if multiple impacts on the skull created cumulative concussive responses or if the brain and its related organs and processes returned to a baseline between impacts. Of course, multiple impacts are sometimes an issue for humans, including when they box or play football. To explore this issue, researchers hit drugged dogs over the head with a hammer once every half an hour. In the experiment pictured here, the dog was struck thirteen times. In other words, the experiment lasted at least 6.5 hours. Then the dog died.

Over time, the Wayne State researchers developed their own research facilities. Most famously, they devised a—perhaps unique—scientific instrument: they removed an elevator from its shaft and began dropping cadavers and living non-human animals down it. As one Wayne State researcher recalled, "A light frame mounted at an angle of about 30 degrees to the horizontal was guided on vertical rails and could be dropped from the required height to achieve the desired velocity. The cadaver was placed on the frame with the head extended out over the end of the sled. A heavy steel plate was placed in position for the head to hit." 

The Wayne State Accelerator was an elevator shaft with the elevator removed. Researchers could use pneumatic forces to shoot bodies up the shaft or simply drop human cadavers and living and dead non-human animals down it.

Many years later, when the author Mary Roach visited the facility for her book, Stiff, researchers purchased "Smurf-blue leotards" from an unwitting local dance store and placed them on cadavers to keep all of the various bits together during the impacts. 

In my longer conference paper, I had a section that described why researchers believed that non-human animals made good models for humans in biomechanics research. (I’m happy to share the longer essay with anyone who is interested. Just email me at leevinsel@gmail.com.) My claim was that biomechanics researchers gave three reasons for using non-human animals as test subjects. Animals made good models 1) because of their likeness to humans, 2) because they acted as a supplement or complement to cadaver studies and other safety research, and 3) because they acted as stand-ins where little data was available on human subjects. But there is and was a larger issue at hand. In many, if not all, cultures, the image of "man" has always depended on the image of the "animal." We seem to require other living beings to define ourselves. As a recent reader in animal studies argues, "Animals . . . are so deeply enmeshed in human self-conception that if they did not exist we would need to invent them."[2]

Three photos: An embalmed dog placed in a testing frame. The device that was fixed to the dog's vertebra to study forces under acceleration. The dog in the frame put inside the Wayne State Accelerator (elevator shaft). The same kinds of studies were conducted using living animals.

Since I began researching this topic, many people have asked me about its moral dimensions. Most often they want to know whether I condone these experiments on non-human animals. They want me to judge the researchers. This part is not hard in my view. I cannot imagine a possible world where it is moral to hit a dog over the head with a hammer for nearly seven hours. I believe, however, that this question of justice and the act of judging the researchers are not the most difficult, or even the most important, moral problems involved in this research. These experiments also connect to subtler moral questions, for instance, ones having to do with the banality of evil. The quandary of these particular banalities is that they have departed the laboratory and influenced the shape of our world.

Researchers at Tulane argued that "despite some differences, the ratio of head mass to cervical spine cross-section in rabbit and in man is in reasonable geometric proportion. Moreover, the Belgian hare, if properly positioned, is capable of sitting erect." So, they decided to use rabbits in a study of "whiplash." As the Tulane researchers noted, "The original objective in this experiment was to produce injuries sufficiently severe to cause death and possibly decapitation in the test animals." Yet, this goal proved difficult. When the first test instrument proved insufficient for the challenge, the researchers went through a Goldilocks-like process of designing more and more severe testing equipment, three or four devices in all, trying to find one that was "just right." But the researchers never achieved their objective.

Over thirty years ago, Langdon Winner published his famous article, the title of which asked, "Do Artifacts Have Politics?" He answered definitively, "yes." In his most famous example, Winner recounts a story from Robert Caro's biography of the city planner Robert Moses. Moses, Caro argues, was a racist, and he was determined to keep blacks and the poor off his newly designed parkways and beaches on Long Island, New York. Since impoverished minorities primarily used public buses for transportation, Moses thwarted their hopes of reaching his beaches by designing the bridges on his parkways to be too low for buses. Thus, Winner claims that Moses's bridges have a built-in politics; they were "a way of settling an issue in a particular community." The "politics" of Winner's bridges are social facts, however. They are a part of the meaning of things in a given society, at a given time. If a future society or an alien culture with different social structures came to inhabit the region, it could still use Moses's bridges, but it is hard to see how the bridges would still be racist bridges.

A rabbit undergoing a "whiplash" simulation at Tulane. In their article based on these studies, the Tulane researchers concluded, probably unhelpfully, "The historical premise that the neck of the rabbit is fragile appears to be in error."

Similarly, via safety standards, these experiments in biomechanics may adhere to our things, though they are not a part of a technology's physical essence. They aren't a part of the thing itself, what academics might call its ontology. Their presence is closer to a "hauntology," one of Jacques Derrida's punny neologisms meant to denote a form of existence that is neither present nor absent. These animals and cadavers do haunt our things; we carry them with us, even if they are no longer present. Their destruction has "shaped" our technology. Scholars probably draw on the metaphor of shaping too often and too vaguely, but in the case of auto standards and animal experiments, shaping is quite literal. For instance, automakers design dashboard crumple zones with federal safety criteria in mind, and if the Wayne State studies played some role in setting federal safety standards, then crushed dog skulls played some part in the shape of dashboards. QED. These destroyed animals are with us, even if we are not aware of them.

The Wayne Curve, a graphical representation that aggregated studies on human cadavers and living non-human animals (dogs), which came to depict the amount of force that the average human being could withstand. The Wayne Curve played a central part in the first automotive safety standards in the United States.

These ideas raise questions about how we should relate to the things around us. When we reach for a plastic container of shampoo in the shower, should we see a wall of rabbits’ eyes staring back at us? Because the eyes of rabbits were the most common “instruments” used in the Draize toxicity test for cosmetics and other substances. When we look out at a field of helmeted high school football players, should we see the mangled bodies of biomechanical test subjects hovering over the field like some inverted guardian angels? Because impact biomechanics has greatly influenced protective clothing for athletics. What would it entail if we did see things in this way? Would it even matter?

One answer to these questions would be that, if we truly believe that the biomechanical experiments on non-human animals were immoral, we should somehow purify our knowledge and practices of them. This notion recalls debates around the use of knowledge that Nazi scientists gained by experimenting on victims of the Shoah, an analogy I do not wish to belabor.

When members of a committee of the Society of Automotive Engineers became interested in how seatbelts affected pregnant women, the Federal Aviation Agency and researchers at Holloman Air Force Base conducted a series of experiments on pregnant baboons. Because researchers were not able to find enough pregnant baboons for their study, they implanted, in three non-pregnant baboons, a "simulated uterus," which "consisted of a rubber balloon enclosed in nylon netting and [containing] a transducer to measure pressure changes during acceleration." The researchers then put the baboons through a series of decelerations, which were meant to mimic the experience of a "typical passenger in a hypothetical Boeing 720 type airliner crashing on takeoff." All of the mothers and fetuses died, some only after twenty hours of suffering.

In the late 1980s and early 1990s, debates arose over whether data from Nazi experiments should be published or used, with some claiming that the data should be set aside altogether and that using it amounted to "harming the victims anew."[3] Some solutions put forward for resolving the controversy—such as gaining the consent of victims or their families before using the data—simply do not fit cases involving non-human animals. Yet, on a broader level, such arguments raise the question of whether knowledge, once gained, can be purified, set aside, purposely forgotten. If we stopped using the published results, should we also set aside any study that cited or relied on these publications? Should we start over from scratch? But some theories in the philosophy of science claim that we cannot test hypotheses in isolation; we require auxiliary, or background, assumptions.[4] If this thesis is correct, could we guarantee that our assumptions were not somehow handed down from these earlier studies? Could we ensure that our received wisdom was somehow pure of this stain? Central myths in the Western tradition, including the Biblical Fall and the story of Prometheus, suggest that knowledge cannot be unlearned, or at least not easily forgotten.

A graphical representation of some of the major biomechanical findings for the seatbelt experiments on pregnant baboons.

Instead, perhaps learning to live with such knowledge is a part of living with time, of being in and of history, and perhaps this knowledge requires us to live in a different relationship with things. What is this relationship? I do not know, but its (possible) invitation troubles and unnerves me. The “Theses on the Philosophy of History” contains one of Walter Benjamin's most famous passages:

“There is a painting by Klee called Angelus Novus. An angel is depicted there who looks as though he were about to distance himself from something that he is staring at. His eyes are opened wide, his mouth stands open and his wings are outstretched. The Angel of History must look just so. His face is turned towards the past. Where we see the appearance of a chain of events, he sees one single catastrophe, which unceasingly piles rubble on top of rubble and hurls it before his feet . . . .  A storm is blowing from Paradise, it has caught itself up in his wings and is so strong that the Angel can no longer close them. The storm drives him irresistibly into the future, to which his back is turned, while the rubble-heap before him grows sky-high. That which we call progress, is this storm.”[5]

Yes, and my research suggests this: that the storm called progress is true even of its seeming opposite, that destruction is present even in its inversion—what we call “safety.”

 

 

[1] All quotations in this paragraph are from E. S. Gurdjian and H. R. Lissner, “Mechanism of Head Injury as Studied by the Cathode Ray Oscilloscope Preliminary Report,” Journal of Neurosurgery 1, no. 6 (1944), 393–399, esp. 393.

[2] Aaron Gross, “Introduction and Overview: Animal Others and Animal Studies” in Animals and Human Imagination: A Companion to Animal Studies, ed. Aaron Gross and Anne Vallely (New York: Columbia University Press, 2012), 1.

[3] See, for instance, Stephen G. Post, “The Echo of Nuremberg: Nazi Data and Ethics,” Journal of Medical Ethics 17 (1991), 42–44, esp. 43.

[4] I’m thinking of the Duhem-Quine Thesis.

[5] My copy of Benjamin’s Illuminations is currently living with my wife in Missouri. I take this quotation from the translation of Benjamin’s “Theses” available here: http://www.marxists.org/reference/archive/benjamin/1940/history.htm

The Zombie Scale of Classroom Participation

I've used this scale to great effect in my courses for the last few years, but this is the first time I am sharing it publicly.

The Zombie Scale of Classroom Participation

4 points = Congratulations! You are a healthy adult human being! And as a responsible adult, you have prepared and are now making quality in-class contributions.

3 points = Hmm. You may have prepared, but your contributions are just OK and don't demonstrate any deep understanding of the material. Perhaps you are just having an off day, OR (!) perhaps you have been bitten and now have the zombie plague! It's hard to say . . . 

2 points = It's fairly clear you've been bitten now. You have the creeping zombie crud. Most times, you sit silently, becoming gray and developing the zombie shake. Sometimes you may talk in class, but what you say is off topic, displays no sense that you read the material, or is pure BS. (Of course, I mean BS solely in the technical sense. See Frankfurt, "On Bullshit" [1986].)  Every now and then you emit strange, small sounds, somewhere between a wheeze and a snore. 

1 point = No signs of human life remain. Your body may be here, but your mind isn't. If any thought is present, it is for checking your cell phone.

0 points = Unexcused Absence. You have become so zombified you are not even here. In all likelihood, you are feasting on someone's liver in the cafeteria.

One Reason Why I Do What I Do: A Historian's Thanksgiving

Thanksgiving.

A couple of nights ago, I was walking my dog, Baron, around my neighborhood in Maplewood, New Jersey, and some memories came rushing at me. I could do nothing but stop in my path and take notice. 

I must have been eight or nine at the time. My growing family lived on Glenwood Avenue in the rustbelt town of Joliet, Illinois, in a 2.5-bedroom house that I think of as somehow interstitial between our landing (from Ohio) in a poor renter's place in neighboring Rockdale and my parents' first home purchase on Joliet's Prairie Street. It must have been 1987 or '88.

On the night of this memory, my dad took me and my brother to pick up some groceries at the old Jewel grocery store, which those-in-the-know will remember lay closer to the corner of Jefferson and Larkin than the current one.

I really don't know what we went there for. In fact, I have zero memory of the early minutes of our time there that day or of walking through the aisles with my dad and my brother, picking up groceries. I remember nothing of our travels until we went through the checkout lane and were on our way out.

Then the memory comes, forcing itself upon me. 

A disheveled man often stood near the exit of the store. I remember him from my family's earlier, frequent visits to Jewel. The man frightened me. He was gaunt, too thin for his clothes, and dirty. He looked like he smelled (in my memory), though I don't recall any odor, and he acted odd. He carried little booklets around, but until that time, I never knew what was in them. I guess I later thought of the man as "homeless," though many mentally ill people in Joliet who would have been homeless in other places, in fact, lived with family members or in neighborhood-based institutions.

After my father paid the clerk, we went to leave, and instead of hustling us past this homeless man—who haunted me, like a ghost, during that period—dad started drawing us toward him. 

"Hi," dad said. The man shuffled and returned some uncomfortable, incoherent greeting. He spoke under his breath to the store's exterior window. Skipping no beat, dad (his name is Lance) replied, "I was wondering if you could show us some of your coins." And the man again responded incoherently but opened these booklets I had noticed before. 

The books were full of grubby but antique coins, and when the man opened them, the pages shined with magic. I so wish that I could recall for you what the man said, but I was too young to recollect his words now. And yet I remember that he narrated every coin on every page. He knew their histories, their origins, their worth. The man was insane—perhaps schizophrenic—but he was a numismatist who knew the coin collector's craft. And he displayed that knowledge for us on the badly painted bench near the door, which old mothers usually sat on as they waited for their rides. 

My family had little extra money at that time, and my father explained that he could not buy any coins. After he listened for some minutes, dad said to the man, "The other night you yelled at me, and I didn't know what I did, but I wanted to make sure that everything was alright between us." Again, the man shuffled. I remember his eyes being wet, though I cannot be sure. But I do remember the look of recognition the man gave my father: at least this one has seen me.

We left soon thereafter. I do not know if my brother and I asked any questions, or if my father recognized the shock in our faces, but I remember my dad saying to us as he walked us to the car, "I am sorry if that upset you, but that man followed me out to the parking lot the other day screaming, and I wanted to ensure that I had not harmed him."

I don't really know what my dad was thinking that night. He's the most private man you could ever imagine. Even his kids don't know him, really. My guess is, if you had asked him at that time what he was doing, he would have pointed to how Jesus acted in the Gospels. Or maybe he would have gestured towards his family or his upbringing in the hill-town of Zanesville, Ohio, where no one is beneath mention. Or maybe my dad was trying to teach my brother and me a lesson. I have absolutely no idea. 

But I do know the effect his act had . . . because I can never forget it whenever I am looking another person in the eye. 

My father taught me that every human being is worth our consideration and attention, no matter what state they are in. As a teacher, I try to carry out that lesson every day. As a historian, I apply it even to the dead. As a social scientist, what interests me most in life are the gaps between us, which keep us from comprehending each other. I work hard to overcome these gaps of understanding wherever I can, even though I know that understanding doesn't always mean agreement. 

I cannot express the gratitude I feel for the path that my father—perhaps unwittingly—set for me. So, let this be my Thanksgiving. 

With love. 

Remembering Klepper

This week I am headed to an event at Carnegie Mellon University titled "Celebrating the Work of Steven Klepper." (A PDF flyer for the event is here.)

Steven Klepper, 1949-2013

Klepper, who earned his PhD in economics from Cornell in 1975, was a member of CMU's Department of Social and Decision Sciences. He died too young, robbing us of the chance to learn more from his insightful, often surprising way of working out problems, a personal method that came to be known as "Kleppernomics." To fulfill his quest of better understanding the dynamics of technology, business, and "innovation," he did the hard work of building fascinating databases. (One, if I recall correctly, used catalogs for lasers to track changes in industry organization over time.) Klepper was a happy model-builder, but his use of math had a graceful, parsimonious quality often lacking in today's mainstream economics.

He also had an abiding love of history and approached many of his topics in a historical, evolutionary fashion. Two of his most highly-cited pieces are "Entry, Exit, Growth, and Innovation over the Product Life Cycle" and, with Wes Cohen, "A Reprise on Size and R&D" (both behind paywalls). Klepper was deeply interested in questions of industrial dynamics, like how industries emerge and shakeout to form oligopolies, but he tied these issues to others that deeply interest me, such as production of knowledge, where new technological ideas come from, and how ideas and technologies diffuse. 

In addition to being an incisive thinker, Klepper was a passionate mentor and program builder. He played a crucial role in developing the CCC Doctoral Conference, an important institution in the training of PhDs in strategy, technology management, innovation, and related fields. At CMU, he helped found the program known as SETCHANGE (which I believe originally stood for Structure, Entrepreneurship, and Technical Change). My favorite memories of Klepper are from SETCHANGE seminars. His mind and questions were always probing, but in ways that aimed to support, help, and calm the individuals being questioned. And he had a wonderful sense of humor. I know that many people miss him.

I believe that a Festschrift-like special issue on Klepper's thought is coming out soon in a major journal (perhaps Industrial and Corporate Change), and there is rumor that a textbook Klepper was working on will eventually be released to introduce his thinking to the wider world.

As I have argued elsewhere, I believe that, if we can more actively bring together contemporary trends in the history of science and technology, organizational theory, and other fields with the kinds of Schumpeterian and evolutionary economics that Klepper embodied, we can take the (historical) study of technology into whole new domains. 

@EvgenyMorozov @NewYorker #ShowUsTheMissingPages

We may be getting closer to understanding how a book review in The New Yorker that was written by Evgeny Morozov but based heavily on Eden Medina's Cybernetic Revolutionaries reached its final, unethical form.

In my last blog post, I introduced the cast of characters, laid out what we know of the facts, and presented an argument for why Morozov and The New Yorker should issue an apology and corrections. I am not going to rehash those details here. Rather, I want to move the story forward and, in light of new information, strengthen my demand for action on Morozov's and The New Yorker's parts. 

Morozov responded to my first post via Twitter. Unfortunately, it was the kind of response that we have grown to expect from him. 

Historians who have met Morozov verify that he knows Eno, but this response does little to address the central point: giving due recognition to Eden Medina's (again, award-winning, not merely "entertaining") book, Cybernetic Revolutionaries.

As is becoming routine in this case, USC Annenberg School PhD student Meryl Alper put it best.

After the Eno Tweet, Morozov largely went silent on Twitter. He has not responded in any formal way to demands that he better recognize Medina's book, and he certainly hasn't publicly apologized. To my knowledge, The New Yorker has not responded either. 

The void left by Morozov and The New Yorker has been filled in part by Janet Browne, the Chair of Harvard University's History of Science Department, where Morozov is a doctoral student. Browne has recently become interested in the history of computing, and so, last week, she joined the mailing list of SIGCIS, the world's foremost organization of computer historians. Along with Twitter, the SIGCIS mailing list has been one of the main places where people have been discussing the Medina-Morozov affair and the ethical lapses in Morozov's New Yorker essay. 

On Saturday, Browne sent out an email to the SIGCIS mailing list titled "historians and journalists." (I will post a link to Browne's email here once it becomes available in the SIGCIS archives, which are here.) Browne's email gives us further evidence that something is awry in the Medina-Morozov case. 

We should have some sympathy for Browne's position. (In addition to being a scholar of the highest caliber—her works on Darwin are some of the best on the planet—Browne is also by all accounts a good, kind, upstanding person.) I think that Browne was especially worried that some people on the SIGCIS mailing list had accused Morozov of plagiarism, a word that I have not used, as I discussed in my last post. In her email, Browne writes, "I would like to clarify that the history of science department at Harvard is completely scrupulous—exacting—in what it expects from its faculty, graduates, and students working in any media." I don't know anyone who has doubted this.

To Browne's credit, it is also important to point out that she is trying to heal a wound in the community of inquiry, known as the history of science and technology, and she has emphasized the quality and importance of Medina's work. Browne writes, "We in the close-knit community along Massachusetts Avenue in Cambridge are united in applauding the excellence of Eden's book Cybernetic Revolutionaries: Technology and Politics in Allende’s Chile. The book won the Computer History Museum book prize in 2013 and the 2012 Edelstein Prize from SHOT. It is well-written and very engaging. I hope that SHOT readers will remain confident in the proper practices of our fellow scholars and continue to share their work with the larger public."  

But it is when Browne turns to the Morozov case that problems arise. First, she notes that, after the controversy emerged, Morozov immediately went to his "editors to say that one sector of" his "readership was unhappy with the way the narrative unfolded." Fair enough. Then Browne points to Morozov's Tumblr explanation of his research, an explanation that I criticized in my last post. 

But it's Browne's next sentence that sets off red flags. She writes, "I can confirm that several pages of [Morozov's essay], including comments on Eden Medina's book, were cut for space reasons." (I will assume in the rest of the post that Browne's statement here is true.)

What is in these missing pages? Of course, we don't know. That's the point. 

Here's one possibility: if these pages and their mentions of Medina had been included in the final essay, perhaps the piece would have adhered—not to academic norms of citation—but to The New Yorker's own style and tradition of citation and reference in reviews. As Nathan Ensmenger has pointed out, in one Critic at Large piece, George Packer mentioned the author he was reviewing 38 times. In another Critic at Large essay, Rick Perlstein reviews two authors, mentioning them a total of twelve times. A Louis Menand Critic at Large essay, which I cited as evidence that Morozov simply wasn't living up to the magazine's own expectations of attribution, mentions the author under review, Evelyn Barish, 20 times.

If the missing pages from Morozov's essay had brought it in line with The New Yorker's generous-enough citation style, then in comparing the earlier version with the published version we would witness the erasure of Eden Medina from the very story that she worked so hard to tell. How does this look? Bad. Very bad.

As I wrote in my last post: On Twitter, Meryl Alper pointed out that there is an additional irony: Medina's "work highlights power imbalances in knowledge production and circulation." The Medina-Morozov affair is a story of power. A famous male tech critic ensconced in the world's premiere university wrote an essay in one of the most important periodicals in American letters. The essay drew heavily on the work of a less well-known, though award-winning, female scholar. As the historian of computing Thomas Haigh put it, Morozov's essay "spent about twenty paragraphs on the story of the Chilean Cybersyn network . . . closely recapping Medina's argument and evidence." Yet Morozov mentioned Medina only once, in a way that did not make clear that the whole essay drew in important ways on her fundamental work. He obscured her influence.

Morozov and Janet Browne have both brought up the fact that The New Yorker essay was heavily fact-checked. But no one has doubted the facts in Medina's book (to my knowledge). The question is whether the facts (and the analysis) in Morozov's piece were all his or if they belong in significant part to Medina's labors. It is a confusion that could have been cleared up immediately through early and proper citation.

Apparently, in an earlier draft, Morozov felt a need to mention Medina's work more often ("comments," plural, says Browne), but then he and the editors decided to take Medina out, leaving many readers with the impression that all of the thinking was Morozov's own. Why? It's not at all clear. Space? You don't violate ethical norms for space. It's no excuse.

I agree with several scholars who believe that Evgeny Morozov and The New Yorker should release the earlier, Medina-containing version so that readers can examine for themselves how these edits were made. If The New Yorker and Morozov are confident that the published essay is as ethical as the earlier draft, they should have no reservations about sharing the earlier version. If you are right, sunshine and transparency will prove you so. The earlier version should be posted on a blog or otherwise put up in a form that the public can inspect. 

In other words, @EvgenyMorozov @NewYorker #ShowUsTheMissingPages

There are other reasons to go through this exercise of comparing versions. The Medina-Morozov situation (and the Mills-Nasser situation, which I discussed in my last post) should be a moment to pause, to reflect on the relationship between academic norms and journalistic ones. It is a time for discussions, many of which have already begun. There are more to be had. If The New Yorker and Morozov can take the brave step of publishing both versions, they will be providing a great service to the world of writing.

Ultimately, however, examining the different versions will not justify the essay's current unethical form. As one historian wrote to me, "In the end, the author is given space and uses it as he or she sees fit." Another historian, when I suggested that the difference between versions might explain something, wrote "Oh puleeze. It's no excuse. If he was responsible, he would have insisted on putting her name elsewhere." It's true. Morozov seems to have had no qualms about writing Medina out—whether that idea was his or The New Yorker's. 

I do not want to vilify Evgeny Morozov. Morozov has pissed off a lot of people in his day, and I have slowly watched on Twitter as what I have written has played into their agendas. That saddens me. I agree with Morozov's politics. I have used and will continue to use his writings in my classes. I admire a great deal of his work.

But in publishing his essay in this form, he has done something wrong. (I have received dozens of messages from historians and other scholars who agree.) Morozov—perhaps because of his disposition—seems incapable of realizing that he has messed up, even if by accident. When will someone force him to take responsibility? 

The deeper people dig, the worse the situation looks.

Let's be exacting: the time has come for Evgeny Morozov and The New Yorker to act. Issue corrections and apologies now.

 

PS: Morozov posted yet another explanation to his Tumblr, but I do not believe that it deals with the core arguments in any new way. In fact, it appears that he may not understand the argument. And his post certainly doesn't explain the erasure of Medina between versions. Still, for due diligence, I should link to it: here.


An Unresolved Issue: Evgeny Morozov, The New Yorker, and the Perils of "Highbrow Journalism"

Last week, The New Yorker published its October 13 issue. It contained an "A Critic at Large" piece by Evgeny Morozov, titled "The Planning Machine: Project Cybersyn and the Origins of the Big Data Nation."

Famed Tech Critic and History of Science Doctoral Student Evgeny Morozov

Within a few days, historians were chatting. Something was wrong. Morozov's essay clearly borrowed heavily from Eden Medina's book, Cybernetic Revolutionaries: Technology and Politics in Allende's Chile, a book that every reader should buy right now. Medina, who received her PhD from MIT, is an associate professor of Informatics and Computing at Indiana University and co-editor of the volume Beyond Imported Magic: Essays on Science, Technology, and Society in Latin America.

Indeed, Morozov's essay was ostensibly a review of Cybernetic Revolutionaries. Yet Morozov mentioned Medina only once, and the mention came well into his text. To add insult to injury, the citation was glancing at best: "As Eden Medina shows in 'Cybernetic Revolutionaries,' her entertaining history of Project Cybersyn, [Stafford] Beer set out to solve an acute dilemma that Allende faced." The placement of the mention, as well as its wording, could and did give many readers the impression that all of the ideas and work that went into the essay were Morozov's. They weren't.

Eden Medina, Associate Professor at Indiana University

Historians of technology, especially experts in computer history, and other scholars were angry. They took to Twitter and other social media platforms to draw attention to the situation and shame Morozov for his behavior. On the mailing list of SIGCIS, the world's foremost organization of computer historians, members hashed out the ethical lapses of Morozov's essay. Talk on the SIGCIS list became increasingly heated. On Twitter, Meryl Alper, a PhD candidate in Communication at USC's Annenberg School, pointed out that there was an additional irony: Medina's "work highlights power imbalances in knowledge production and circulation." In the Medina-Morozov situation, we have a well-known tech critic (Morozov) and a powerful periodical (The New Yorker) borrowing heavily from a young, female professor's work without due recognition. Don't mind her. She's merely "entertaining."

At some point during the week, Janet Browne, a professor in Harvard University's History of Science department (where Morozov is currently a graduate student), wrote to the executive committee of SIGCIS. She asked its leaders to remove two posts from its "blog" that alleged plagiarism on Morozov's part. Furthermore, she claimed that the issue was "now resolved," that no one had found evidence of plagiarism, and that the paucity of citations to Medina's work was in keeping with the genre of "highbrow journalism."

The mailing list isn't a blog, so there was nothing to be done there, but the issue of plagiarism is a difficult and murky one. I have not alleged that Morozov plagiarized, and I have had questions for anyone who has made that claim. But plagiarism has several definitions. The narrowest definition focuses only on the direct borrowing of language, and I haven't seen anyone claim that Morozov's essay did that. Yet broader definitions of plagiarism include borrowing from an author's argument and research without proper attribution, and it is understandable that some people feel that the Medina-Morozov affair is a case of plagiarism (even if we ultimately believe that such feelings are misplaced).

An Image that Scholars Passed around on Twitter in the Context of the Medina-Morozov Affair

(I've been pleased to learn that the above image on plagiarism was put out by the Poynter Institute. You can find the original post, which contains several other insights, here.)

More troubling to me is the claim that this situation is "now resolved." It isn't. (After word of Janet Browne's communication was shared with the SIGCIS membership, one historian sent out an email to the list titled, "Nothing to See Here, Please Move Along . . . ") And I do not believe that we can invent new genres, like "highbrow journalism," to wiggle our way out of traditional ethical norms around writing. It is this issue that I want to focus on in this post because it is a real problem and it has every appearance of becoming a worse one. 

At least since I entered graduate school in 2005, there has been increasing talk of and pressure for historians (and likely other academics) to write for mainstream publications and communicate via other popular media. For too long, the thinking goes, academics have been writing only for each other. It's time to reach out, to share our thinking with the general public. Blogs, Twitter, podcasts—so many tools have become a necessary part of the engaged academic's arsenal. And there is amazing work being done in the popular arena in the history of science and technology (HOST). NPR's Radiolab is probably the most famous case. But great pieces on HOST are also appearing at the Guardian, Slate, The Atlantic, and, yes, even The New Yorker.

But there's a question: what should historians produce for pop outlets? Academic historical works take years to research and write. You can pop one or two aspects of your research and maybe even write a book for a trade press, but in the end, if you want to produce regularly for popular venues, you are going to have to draw from other sources. Under such conditions, there is going to be a temptation to lean heavily on other people's work and present syntheses thereof.

Today, so much "news," online and elsewhere, is just a rewritten version of stories originally reported by others. In the first episode of the new television series, Gracepoint, a journalist character complains to her boss, "All I'm doing right now is polishing press releases." We live in a world of recycling. It would be sad if historians were willing to join in this forgetting of ethical standards, especially standards about crediting others' work and thought. It is a trend worth resisting.

Even before the Medina-Morozov case, we had already seen instances in which pop writers borrowed too heavily from academic historians. In September, Latif Nasser, who received his doctorate from the Harvard History of Science program, published a piece titled "Helen Keller and the Glove that Couldn't Hear" at The Atlantic. The article recounted the fascinating story of a visit at MIT between Helen Keller and Norbert Wiener, the "father of cybernetics." It's a good story, and Nasser's telling of it is clear and enticing. The problem was that the first published version did not make clear that it was almost wholly a retelling of Mara Mills' essay, "On Disability and Cybernetics: Helen Keller, Norbert Wiener, and the Hearing Glove," which was published in the journal Differences. Mills also got her PhD from the Harvard History of Science program and is now an assistant professor of Media, Culture, and Communication at NYU. As her webpage states, she is "completing a book (On the Phone: Deafness and Communication Engineering) on the significance of phonetics and deaf education to the emergence of 'communication engineer' in early twentieth-century telephony; this concept and set of practices later gave rise to information theory, digital coding, and cybernetics." (A list of her publications is here.)

One thing that separates the Mills-Nasser situation from the Medina-Morozov one, however, is that, as soon as The Atlantic realized there was a problem, it responded.

It's worth pointing out, though, that some believe that there was still a real issue at hand.

The fact that Nasser and the editors of The Atlantic would use a scholar's research so cavalierly is troubling. Far more troubling is the response that so far has come from Morozov (hostility) and The New Yorker (to my knowledge, silence). 

When scholars began questioning Morozov's essay on Twitter, Morozov went on the attack. He told Nathan Ensmenger, an influential computer historian and Medina's colleague as a professor at Indiana University, "As I said, you simply don't know what you are talking about." When Ensmenger suggested that maybe he (Ensmenger) needs to learn how to read a book review, Morozov responded, "Yeah, you do, actually."

Morozov's self-defense has been that The New Yorker's reviews, including "A Critic at Large" pieces, do not mention the authors and books that they are reviewing with great frequency. (In other words, a single mention gets the job done.) As Ensmenger and others have pointed out, though, a quick survey of New Yorker reviews, including "A Critic at Large" reviews, shows frequent mentions and, more important, early mentions of the works under review.

Moreover, Morozov wrote on Twitter, "The main reason to mention the author more than once in that format is if you are arguing with them." Bull. See, for instance, Louis Menand's "The De Man Case," also an "A Critic at Large" book review, in the March 24, 2014 issue of The New Yorker. Menand mentions the author he is reviewing, Evelyn Barish, early and often and in all kinds of circumstances. For example, "From what Barish found, it seems that this was wishful thinking." (Menand would have been a good person for Morozov to look to in writing his piece, since Menand writes for the magazine and is a renowned scholar, as Morozov aspires to be.)

In response to people questioning his sourcing of Medina, Morozov put up a post on his Tumblr, titled "Some notes on my cybernetic socialism essay."  He spent most of the piece describing all of the work he had done, which is utterly beside the point when it comes to proper attribution.

Morozov posted this picture to Twitter as evidence that he had done research

Yet, Morozov also admitted in his Tumblr post, "But it's a book review essay, and I do mention the book under review."

On his blog, Morozov writes, "It's probably not obvious to people who haven't read Medina's book AND all the materials that I've read but: I'm not actually drawing on her book when I'm summarizing quite a few things in my piece." True enough. But notice that, by definition, this makes Morozov nearly the only person on the planet who can judge when he was borrowing from Medina and when he was coming up with his own material. The fact is that his telling of the story was simply too close to Medina's for scholars to tell the two apart. That's a problem, and it is a problem avoided through citation.

Other historians have also suggested that Morozov drew on works that he did not cite at all, including Andrew Pickering's The Cybernetic Brain, which recounts the correspondence between Brian Eno and Stafford Beer, something that Morozov writes about in his essay. When the historian of computing David C. Brock, a Senior Research Fellow at the Chemical Heritage Foundation, pushed Morozov on this point, Morozov replied, "Pickering's book was duly read when it came out and it failed to impress. Actually, I found it awful." Here, Morozov repeats a mistake that he also made in his Tumblr post, where he wrote, "Am I absolutely happy with Medina's book? No. In fact, I even have minor quibbles with it." We don't cite other authors because we agree or disagree with them but because the hard work they have done has taught us something.

Moreover, we cite because it often becomes unclear what are our ideas and what are the ideas of those we have read. Here Morozov is not reassuring. After Morozov put up his Tumblr post explaining his essay, a fan asked him a question.

Almost every study I have ever seen shows that we tend to overestimate the accuracy of our memory (just as we overestimate our ability to "multitask"). Morozov asks concerned readers simply to trust his assurances.

I have some sympathy for Nasser and Morozov. I am hard at work on my first piece of historical writing for a popular magazine. I know how hard it is to keep things tight, get the flow right, and avoid weighing the text down with academic bullshit. But I also know how my piece draws on others' research. I will cite them. If an editor would not let me give credit where credit is due, I would walk away. At least I hope I would. 

On Twitter, the historian Patrick McCray, a professor at University of California—Santa Barbara, began discussing the Medina-Morozov affair with the hashtag #faust. (About a year ago, McCray wrote an interesting blog post on Morozov, which included reflections on Morozov's relationship to academic norms.) #Faust is right. We live in a world full of intense pressures, and writers sometimes face Faustian bargains. What will we choose?

The fact remains that no matter how much archival research Evgeny Morozov did, his essay drew heavily on Eden Medina's fine and award-winning book, Cybernetic Revolutionaries, and he did not make that at all clear. 

READ THIS (AWARD-WINNING) BOOK!!!

If what I have said above makes sense—and I would be more than happy to hear that I am wrong—this situation is not resolved. Any suggestion that it is resolved might be interpreted (perhaps misinterpreted) as an attempt to quash this controversy. The situation is unresolved, but there are straightforward steps to resolve it:

• Evgeny Morozov must apologize. Publicly. It doesn't matter if he didn't intend to do anything wrong. He did. His essay failed to properly acknowledge its sources. The placement and wording of his mention of Medina give readers a false impression. Morozov should make his sources clear in a written statement and confess his wrongs. His apology must be public because the damages that result from these kinds of violations go beyond the personal to the level of communities of inquiry and, ultimately, to the level of creative humanity. He also needs to drop his defensive, arrogant, and hostile attitude. Morozov's Twitter bio reads, "There are useful idiots. Look around." How might that worldview have contributed to this situation? Did he find Medina useful?

• The New Yorker has not responded in any formal way. It must. This situation is partly a result of faulty editing at the magazine. The online version of Morozov's essay should be edited with the proper notations that changes were made because of this ethical problem. The magazine should issue an apology and correction in print.

• Academics should begin a process of discernment about their relationship with journalism. We must consider what norms will guide us no matter where we are working. Some historians have said that they are going to teach the Medina-Morozov situation in their classes as a case of ethical violations. A few have even suggested that they will teach it as a plagiarism case. One historian claimed that we must go further and think about how we will handle our graduate students if they break such ethical codes. 

To begin with, however, we scholars must speak out when we see these kinds of violations happening. The Medina-Morozov situation scares some people. I had a friend say, maybe partly in jest, that he didn't want to speak up because he is "terrified of the Harvard Mafia." Others have said that they have no desire to upset editors at The New Yorker. (Oh, dreams of publishing in The New Yorker, you dissipate with each passing word.)

But we have to stand up for each other. If we don't, who will?