What's Really Behind The Facebook Psyche Experiment Controversy

Over the past few days, Internet users have done what they do best: rage over some controversy. In this case, the controversy hinges on a paper published in the Proceedings of the National Academy of Sciences by three researchers: one of whom works at the University of California, San Francisco, one at Cornell University, and one at Facebook. In the paper, the researchers report on a study conducted on 690,000 Facebook users in early 2012, in which some users were shown more negative content in their news feeds and others were shown more positive content. The researchers wanted to test the theory that emotions are contagious, and they found that, indeed, users who were shown more negative content were more likely to make negative posts. (Here's the paper.)

Most of the squabbling over this study has focused on the question of its morality, and as with all squabbles, there are several different positions on the matter. Some Facebook users clearly feel violated by this study. They think it's creepy. Others argue that users shouldn't feel violated, or at least that if users want to feel violated, they should realize that Internet and social media companies are doing this kind of thing all the time. Web and social media firms often conduct what are called "A/B tests," in which one group of users is shown one thing and another group is shown a second thing, to improve user experience and—more important—to, like, GET MORE CLICKS. In other words, this second group looks at the Facebook experiment and says, "Same as it ever was." Some go even further and say that not only are experiments like this not new, but the findings aren't all that new either. As Katherine Sledge Moore, a professor of psychology at Elmhurst College, told the BBC, "The results are not even that alarming or exciting."
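If you've never seen one, a bare-bones A/B test is not much more than a coin flip plus a counter. Here is a minimal sketch in Python; the experiment name, user ids, and click rates are all made up for illustration, and real systems layer logging and statistical testing on top of something like this.

```python
import hashlib
import random

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user into variant 'A' or 'B'.

    Hashing the user id together with the experiment name keeps each
    user in the same bucket every time they load the page.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Simulated click-through comparison between the two buckets.
clicks = {"A": 0, "B": 0}
views = {"A": 0, "B": 0}
for uid in (f"user{i}" for i in range(10_000)):
    variant = assign_variant(uid, "new_feed_ranking")
    views[variant] += 1
    # Pretend variant B nudges the click rate up slightly.
    rate = 0.10 if variant == "A" else 0.11
    clicks[variant] += random.random() < rate

for v in ("A", "B"):
    print(v, round(clicks[v] / views[v], 3))
```

The company then ships whichever variant got more clicks, and that is essentially the whole game.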

A slightly deeper level of questioning has focused on the academic research ethics of the paper. Academic psychological experiments require consent on the part of the research subjects and (to my knowledge) some form of debriefing, especially when the subjects are put through a negative experience. Facebook claims that users had already consented to the research done in the study when they agreed to the company's Terms of Service (ToS), the long document we all click through when signing up for services like Facebook, which hardly anyone ever reads but which, it turns out, contains a clause saying that the firm can utilize user information for research. But it's not as if Facebook users agreed to this particular study, and (again, to my knowledge) academic research ethics generally does not accept the kind of standing consent entailed in the ToS. Given this seeming misalignment between academic research ethics and Facebook's ToS, it's unclear how this study made it past the research standards of the journal or the institutional review boards (IRBs) at the two universities. We can be sure that it did make its way past such ethics filters, however. Perhaps the decisions to OK this study will now become controversial at the journal or the two universities, but I doubt it. We shall see.

In this post, I would like to focus on a slightly different issue than any of the ones I have just mentioned: how the researcher at Facebook fits into the history of corporate R&D and what this might have to do with anxieties about the study.

One question has continuously hovered in the background of this controversy: why would Facebook bother publishing anything in an academic journal? Well, perhaps the company is seeking some new form of scholarly legitimacy or cultural capital. Maybe. But does a multibillion-dollar enterprise like Facebook really care about being taken seriously by academics? I doubt it. Perhaps Facebook believes that the research it is doing can benefit the world. I doubt that's the motivation either.

I think that the answer may lie elsewhere. 

In the past two decades, companies such as Microsoft, Yahoo!, Facebook, and Twitter have opened research divisions. What do researchers at these places study? Well, many things, but one of the things they study is us. Whereas the traditional R&D labs of the 20th century, at places like American Telephone and Telegraph and DuPont, focused on physics and chemistry and the other sciences that could produce new materials and technologies to increase the companies' bottom line, the R&D groups at these digital technology firms focus on human behavior. We are their material. We are the way to increase their bottom line. Especially our clicking and swiping fingers. Consequently, these new Web 2.0 R&D labs have been hiring academically trained, often PhD-bearing social scientists, including social psychologists, sociologists, anthropologists, and scholars of communications studies. And you can imagine that these social scientists are making $BANK$, at least compared with what they would make at a university.

Companies should always be wary of hiring academics, however. We come with baggage. This was even more true when the first R&D labs were formed in the early 20th century. Scientists who went to work for such labs very often faced being blacklisted and struck from the registers of institutional science, never to work in academia again. Corporations offered more money, but scientists also worried about their prestige. To attract researchers, the corporations created academic settings within them, including libraries, seminars, and other trappings of the scholarly life. Still, the scientists wanted more: they wanted to publish, the primary means of securing an academic reputation. Corporate executives had mixed feelings about publication, and often placed limits on what went out, but they did go along with some publishing. 

While the social scientists at Web 2.0 firms are enriching themselves (again, at least in comparison with university compensation), they likely also want to see themselves as legitimate social scientists and to have the status and prestige that come from publishing and performing in front of one's peers. I imagine that this desire at least partly answers the question, why would Facebook bother publishing anything in an academic journal? It did so because its people want to do so, and it needs to keep its people happy, or they will leave.

Viewed in this light, part of the controversy surrounding the Facebook study arises from blurring the boundary between academic social science and corporate strategy. My guess is that this boundary and the moral and political issues that surround it will only get messier with time. More problems are a comin'. I could go on about this forever, but I will limit myself to a few points. 

First, as more of these types of studies are published and researchers increasingly try to blur the line between Web 2.0 market research and academia, I think we will hear more questions about how trustworthy the published results are. Physics done by corporate researchers is still physics (though perhaps academic physicists critical of their corporate peers didn't always think so). Social science, however, often has a short shelf-life, and its results are seemingly more open to manipulation for political or economic ends. (My friends and colleagues will justly quibble with this fast and dirty distinction between the natural and social sciences, but there it is.) As social media firms publish more research, we may see people call its credibility into question.

Indeed, I think we've already seen cases of this questioning. One example is danah boyd's book, It's Complicated: The Social Lives of Networked Teens. boyd received her PhD from the UC Berkeley School of Information, and she works at Microsoft Research. In It's Complicated, boyd argues that teens' social media use is simply an extension of ordinary adolescent sociality, and that parents should spend less time worrying about such activity. Furthermore, she argues that parental "paternalism and protectionism hinder teenagers' ability to become informed, thoughtful, and engaged citizens through their online interactions." A PBS program probably put it best when it wrote that, according to boyd, "the kids are all right."

I know several academics who refuse to take boyd seriously because of where she works. You mean to say that someone employed at Microsoft Research is telling us that social media and digital technologies are nothing to worry about? A colleague likes to remind us, "A conflict of interest simply is the appearance of a conflict of interest. The appearance is enough." Others I know—those of a Marxist bent and those prone to conspiracy theories—go further, asserting that the only reason we are hearing about boyd in the first place is that she serves the interests of the very media that are informing us about her.

I actually like boyd's work. I have taught Computers & Society, a college course that examines the history, politics, and morality of computers and digital technologies, four times now, and I think that boyd's account generally accords with how my students see their digital technology and social media use. In fact, I plan to use portions of It's Complicated in that course when I teach it again this fall. Yet, when we talk about the book in class, I will be sure to describe its background and have a discussion with the students about how the writer's position may have biased her assertions or at least biased our reception of them.

Second, beyond these issues of epistemology and trust, it's an open question whether Web 2.0 social science researchers at corporate labs will be able to avoid moral judgment by their peers, harkening back to the time when scientists were blacklisted for working in corporate R&D labs. In popular culture, people in marketing are seen as . . . well, something less than human. (Sleazoid pond scum?) Given that, in the end, Web 2.0 companies are advertising businesses, it is unclear whether social scientists at the firms will be able to avoid marketing's social stain—at least when it comes to the judgment of their peers.

Third, some people are already worrying that "Big Data" operations are "sucking scientific talent into big business," as my colleague and buddy, the science writer John Horgan, put it in one of his blog posts. A corollary concern might be that Web 2.0 firms will come to have an outsized influence on the direction of the social sciences connected to the companies' interests. We know how the Cold War influenced American science. Will 21st century click-baiting and P.R.E.A.M. culture (that's Page-views Rule Everything Around Me) affect the course of social science?

Finally, an issue that has already caused controversy with papers coming out of these companies: the firms might be willing to allow, or even encourage, their researchers to publish and give academic presentations based on their work, but they will be less likely to answer in the affirmative when someone asks, "Can I see your data?" The most valuable property of these firms is their information, and they will not sacrifice their core proprietary secrets for the sake of academic openness. In this way, the social scientific research produced at these corporations will violate the so-called Mertonian Norms of Science, including the norm of "communalism," the belief that scientific results (and the data they are based on) should be the "common ownership" of the scientific community. 

Part of the furor surrounding the Facebook experiment has to do with the way it blurred the boundaries between corporate marketing research and academic social science. There's a good chance that it'll only get blurrier. We'll see how the different communities involved manage this interruption. 

"Elevator Shafts and Rocket Sleds," Fashion Institute of Technology, April 8, 2014

I'm really looking forward to discussing my paper, "Elevator Shafts and Rocket Sleds, or How the Military-Academic-Industrial Complex Built Ralph Nader's Ammunition," at the NYC Market Cultures group at 6 PM on Tuesday, April 8, 2014. The discussion will take place at the Fashion Institute of Technology. Thanks to the Market Cultures gang for inviting and hosting me. I've copied the details below. Please come!

Col. John Stapp takes a lot of G's on one of his rocket sleds.

Tuesday, April 8, 6pm

LEE VINSEL, Stevens Institute of Technology: "Elevator Shafts and Rocket Sleds, or How the Military-Academic-Industrial Complex Built Ralph Nader's Ammunition."

Military doctors strapped themselves to rocket sleds, which sped at 600 miles per hour through deserts of the US Southwest. Professors in Detroit dropped cadavers down elevators and smashed hammers on drugged dogs' skulls. And the knowledge they produced was used to found a consumer safety revolution.

Seminar at Fashion Institute of Technology, eighth floor alcove, Dubinsky Building. Enter at the FIT Feldman (C) Lobby on the north side of 27th Street, halfway between Seventh and Eighth Avenues. Someone will escort you to the room.

Please email bea_saludo@fitnyc.edu to RSVP and receive a precirculated draft of the paper.  Please read the paper before attending the seminar.

Stapp flies through the desert on one of the Sonic Wind Rocket Sleds.


Chris Christie's #Bridgegate in Historical Perspective

In September 2013, the Port Authority of New York and New Jersey closed down two of three lanes of traffic to do maintenance on the road leading to the George Washington Bridge, causing an enormous traffic jam, reportedly the worst since 9/11. Soon after NJ Governor Chris Christie was re-elected, Democrats began to allege that the lane closures were politically motivated, that members of the Christie administration had closed the lanes to get back at Fort Lee Mayor Mark Sokolich, a Democrat who wasn't a "Christiecrat" and instead backed Christie's opponent in the election. Last week, emails were released showing that high-level staff members within Christie's administration had ordered the lane closures, if Christie didn't order them himself, and Christie fired several of those staffers, including his Deputy Chief of Staff Bridget Anne Kelly.

So far, discussions of what has become known as "Bridgegate" have focused on whether Christie knew about—or even ordered—the lane closures and whether it will damage his aspirations to become the President of the United States. But I'd like to offer a slightly different take by placing Christie's actions in historical context. I'm first going to consider the broader (nasty) politics of bridges and roads in the New Jersey and New York area before looping around to talk about where Christie fits in all of this.

My fascination with the politics of New Jersey roadways began not with the George Washington Bridge but with a different bridge altogether, and it began late last summer, when my wife and I moved to the New Jersey town of Maplewood. When I needed to drive in to work in Hoboken, I would head down I-78 until it connected to U.S. Route 1-9, which takes you right over the Pulaski Skyway.

The Pulaski Skyway over Kearny.

Sometimes I see the bridge as the Eiffel Tower turned on its side and stretched over three and a half miles. For all of its decrepitude and disrepair, it's a beautiful thing.


But my fondness for the industrial hulk took a turn when I was stuck in traffic on it one day. (The bridge is reportedly one of the least dependable roads in the United States, traffic-wise.) I looked up and saw a plaque on one of the Skyway's steel beams. On it, I could make out the name, H. Otto Wittpenn. Minutes earlier, while working at home on my book on the history of auto regulation in the United States, I had been reading a document from the 1920s from a national conference on highway safety, in which Wittpenn was a participant.

Look up when driving past this plaque on the Pulaski Skyway and say, "Hey, Otto Wittpenn. How's it going, man?"

I understood where Wittpenn fit within the context of the history of auto safety politics, but what were the politics of the Pulaski Skyway?

(I realize that this is not the kind of thing most people think about. I'm a nerd. What can I tell you?)

Luckily, I didn't have to wait long to find out some answers. A couple quick searches and I'd found Steven Hart's The Last Three Miles, which examines the fraught construction of this stretch of road.


When it was constructed, the Pulaski Skyway was a symbol of a changing world. Hart opens The Last Three Miles with an epigraph, a quotation from an engineer, D. P. Krynine, who in 1931 wrote, "The construction of Route 25 [Pulaski Skyway] in New Jersey is an introduction into the transportation system of a new kind of link that is something between 'highway' and 'railway.' This new member of the transportation family may be called 'superhighway.'" The roadway marked a transition from a period when the great engineering achievements were dedicated to the railway to a time dominated by the automobile.

(New Jersey was also, in 1929, the birthplace of the first cloverleaf highway interchange, so it was a real leader in the automotive field during this period.)

I don't want to give too much about Hart's story away. A book with a subtitle that includes the words "politics" and "murder" deserves to be read as the thriller it is. But I'll say a little. It turns out that Wittpenn was the mayor of Jersey City, New Jersey, from 1908 to 1913, when he lost an election to Frank Hague, the infamous political boss who ruled Jersey City until 1947. By the 1920s, Wittpenn was a member of the New Jersey State Highway Commission, which is why his name is on the Pulaski Skyway. But it was Hague for whom the bridge was a real headache. Known as a friend to labor unions during the early part of his mayorship (and a friend to organized crime throughout it), Hague turned violently against unions when labor representatives tried to organize the bridge's construction. Skirmishes between the roving gangs for and against unionization eventually led to the murder of Hart's subtitle. The struggle became known as "the war of the meadows" in local lore. (The bridge goes through part of New Jersey's famed meadowlands.)

Although the Pulaski Skyway was an icon of modernity and technological achievement, it had to fit, literally, into the local politics. The politics of the Pulaski Skyway were the nasty kind that many people associate with the Tammany Hall-esque political machines of the late 19th and early 20th centuries, but which those of us in New Jersey just call "Hudson County Politics" or associate with Newark, birthplace of Mr. Chris Christie. Indeed, this observation about technologies and localities is a general finding of science and technology studies: all technologies, even ones as seemingly general and uniform as roads and bridges, are entrenched in their local contexts.

Of course, the Pulaski Skyway and the George Washington Bridge are not the only stretches of road that have been or are political. On some level, they all are. We've all heard of "The Bridge to Nowhere," and highways were a classic tool for breaking up poor, mostly black, urban neighborhoods during the heyday of "urban renewal" in the 1950s and 1960s. In nearly all of the classes I teach, I have students read Langdon Winner's essay, "Do Artifacts Have Politics?" In that essay, Winner famously recalls a story involving Robert Moses, the urban planner and "Master Builder" who designed many of the public parks and roadways in the New York City metro region. According to Moses's biographer Robert Caro, when Moses was designing Jones Beach in Long Island, he wanted to keep poor people and racial minorities off of the gorgeous sandy shores he was envisioning. So, because poor people tended not to own cars but rather to take public transportation, Moses designed the bridges of the Wantagh Parkway, which leads to the beach, to be so low that buses could not pass beneath them.

A low, bus-forbidding bridge on the Wantagh Parkway

Some people have questioned the truth of this interpretation of Moses's parkway bridges, but as a fable, Winner's account of Moses's bridges has stuck, and the lesson of that fable is this: people sometimes build politics into things, so that the politics become implicit, even invisible. So . . . with all of this in mind, we can say that roads and bridges are often political; they can be racist; they are sometimes examples of graft and corruption and pork; they can be zones of controversy and contention.

This brings us finally to the GWB, the George Washington Bridge, to Chris Christie's Bridgegate, or what progressives have jokingly taken to calling Bridgeghazi.


Interestingly, all of the stories that I have gestured towards so far in this post—whether it's about the politics of the Pulaski Skyway, or the tearing up of black neighborhoods to "renew" cities, or Robert Moses's racist bridges—are focused on the politics of building roads and bridges. What I had not considered before the dawning of Bridgegate is how the maintaining of roads and bridges might be political. But, like, duh!

First of all, we should recognize that the de facto politics of maintenance in the United States is that we don't do it. In 2013, the American Society of Civil Engineers (ASCE) gave US infrastructure a D+ in its periodically updated report card. Hey, there's some good news. Solid waste treatment got a B- and bridges got a C+, though many remain troubled. Indeed, the Pulaski Skyway is graded as "structurally deficient," and controversial repairs are set to begin on it soon after the Super Bowl is played here in New Jersey. But ASCE gave many American infrastructural systems, including roads, a D. With the culturally suicidal Tea Party and other anti-government groups still holding some power, we can imagine that our infrastructure will only further erode.

Still, beyond these de facto politics, who would have thought of using road maintenance as a weapon? Well, New Jersey politicians, of course. I wouldn't be surprised to learn that this road-closure-via-maintenance routine has been a political dirty trick forever in these parts. Just as the politics of road and bridge construction can become invisible because the objects become everyday and mundane to us and we forget that, for instance, there used to be a slum where the road now is, road maintenance can be an effective weapon because it seems so very ordinary . . . except for when it doesn't seem that way, as in the case of the GWB.

That these lane closures amount to a politics of technology is nicely captured in this graphic from the New Yorker, I think:

The New Yorker's graphic of Chris Christie in the control room.

Remember, all politics (of technologies) are local. So what are the local politics of the George Washington Bridge? According to many, many accounts, Chris Christie is and always has been a bully who has fostered a culture of bullying in his administration, including the use of dirty tricks. If these accounts are even close to being right, we don't need him to be our president, because we don't need more bullies, like Lyndon Baines Johnson, or more leaders prone to dirty tricks, like Tricky Dick Nixon. While there is lots of fun to be had with Bridgegate, it's also very serious, deadly serious if reports are true that 91-year-old Florence Genova died because her ambulance was delayed by the traffic.

Some have portrayed Christie's Bridgegate as a result of egomania:

An image Tweeted (perhaps re-Tweeted) by @AmyHamnerWalker with the message, "I guess #ChrisChristie got too big for his bridges."

But perhaps it is better to remember him this way, as the cover of the newest New Yorker casts him, as an irresponsible child who held up the world when he didn't get his way:

Barry Blitt's cover for the January 20, 2014, issue of the New Yorker.

Why Carmakers Always Insisted on Male Crash-Test Dummies

About a year and a half ago, I was lucky to get a post up on Echoes, a cool business history blog at Bloomberg.com. Sadly, Echoes is defunct now. I still like my piece, though, so I am re-posting it here.

A Crash-Test Dummy Family

Beginning with 2011 model-year vehicles, federal regulators have required automakers to use petite female crash dummies in frontal automotive crash tests.

Several news outlets have picked this story up. Yet they haven’t recognized that it’s part of a longer struggle in business-government relations -- and the product of a long-held cultural resistance to considering gender differences in design.

Safety advocates and automakers have tussled for almost 50 years over which human forms to consider in safety standards. Throughout the 1950s and early ’60s, a new safety movement in the U.S. -- made up of consumer advocates, epidemiologists, engineers and others -- argued that government regulation should focus on redesigning cars, not just training and policing drivers.

The most famous of these advocates was Ralph Nader, whose 1965 book, “Unsafe at Any Speed,” did a great deal to focus federal attention on the issue. The efforts of the safety movement eventually culminated in the enactment of the National Traffic and Motor Vehicle Safety Act of 1966, which created the first nationwide automotive safety standards.

The enactment of the law was only the first step, however. Regulators had to get proposed safety standards through the hurdles of federal rulemaking procedures, which allow for public comment and revision. The auto companies fought the rules at each step.

Second Collision

One rule they resisted fiercely was Standard 201, known as “Interior Occupant Protection.” Automakers found this standard galling because, of all the new rules, it most fully embodied a philosophy that Nader and other safety advocates embraced called the “second collision.” The advocates argued that drivers and passengers weren’t injured in the first collision between the car and, say, a tree. Rather, they were hurt during the second collision when their bodies struck the interior of the vehicle. Therefore, the advocates’ goal was to make the interior of the car safer, especially by padding certain key surfaces.

Although the automakers didn’t disagree with the theory, they vociferously opposed the design changes advocates believed should follow.

Complying with Standard 201 required a process like this: An anthropomorphic mannequin, or dummy, was placed in a car. Those conducting the tests would bend the dummy forward and identify where its head might strike the interior of the vehicle. Next, the testers would remove the materials that the head might hit, such as the dashboard, from the car and mount them on a wall. Then, they would use a large pendulum to slam a "head form" into the material. Standard 201 limited the g-forces that the head form could experience during the collision.
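Just to make the pass/fail logic concrete, here is a rough sketch of what checking a recorded deceleration trace against such a limit might look like. The specific numbers (an 80 g ceiling that may not be exceeded for more than 3 milliseconds) are illustrative assumptions on my part, not the text of the standard, and the function and sample trace are hypothetical.

```python
# Illustrative only: the 80 g limit and 3 ms window are assumptions
# for the sake of the sketch, not the actual regulatory criteria.
def passes_head_impact_limit(decel_g, dt_s, limit_g=80.0, max_duration_s=0.003):
    """Return True if the deceleration trace never stays above `limit_g`
    for longer than `max_duration_s` continuously.

    decel_g: list of deceleration samples, in g's
    dt_s: sampling interval, in seconds
    """
    run = 0.0
    for g in decel_g:
        if g > limit_g:
            run += dt_s
            if run > max_duration_s:
                return False
        else:
            run = 0.0
    return True

# Example: a 10 kHz trace with a short spike above 80 g still passes,
# because the spike lasts only 2.5 ms (25 samples at 0.1 ms each).
trace = [20.0] * 50 + [95.0] * 25 + [20.0] * 50
print(passes_head_impact_limit(trace, dt_s=0.0001))  # True
```

The point of the exercise, in any case, was that the padded dashboard had to soak up enough of the blow to keep the head form's deceleration under the allowed limit.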

The automakers disliked many things about Standard 201, including that it required dummies of two different sizes. The two dummies were meant to encapsulate the wide variety of human forms. In an accident, a very tall person’s head would strike different surfaces than a short person’s head would. The dummies were to represent a 95th percentile male (that is, only 5 percent of the male population would be larger than the dummy) and a 5th percentile female (only 5 percent of the female population would be smaller). The 95th percentile specimen had been around since the 1949 creation of Sierra Sam, the first crash-test dummy, which was developed under a contract with the Air Force.

Push-Back

But the automakers pushed back strongly on the need to create a 5th percentile female dummy. No such dummy existed, the industry argued. It would take too long to develop one, and the payoff would be unclear. Although marketers had begun to account for the tastes of women as potential consumers well before the 1960s, many automakers claimed that considering women’s health in engineering was too radical.

Federal regulators were beaten back during the revision of Standard 201 in 1967. They lost out on many criteria and rules that embodied the philosophy of the second collision, and Standard 201 swung strongly in the industry’s favor. One of the few places that regulators won out, however, was the different-sized dummies. The 95th percentile male and 5th percentile female dummies became part of the federal code. The automakers would have to account for women in engineering the car.

Yet battles can be un-won. Regulators may have succeeded in getting female dummies into Standard 201, but such dummies would soon fall by the wayside.

In 1973, “crash-test dummies” became a mandated part of safety standards for the first time. These were the devices we now think of when we hear the term: mannequins belted into test cars that are crashed into various objects. These dummies were used in frontal crash tests, not just for identifying potential impact points. The mandated dummy, known as Hybrid II, was a 50th percentile male -- in other words, the average American guy. Perhaps because the dummy was so complex and costly, regulators and automakers moved from using two extreme sizes to using one model.

Women’s Safety

But a female dummy didn’t become a mandatory part of frontal crash tests until last year. For all this time, the average American guy stood for us all.

That may have had a substantial impact on women’s auto safety. If airbags are designed for the average male, they will strike most men in the upper chest, creating a cushion for their bodies and heads. Yet small women might hit the airbag chin first, snapping their heads back, potentially leading to serious neck and spinal injuries.

In some cases, according to tests with female mannequins, small women were almost three times as likely as their average male counterparts to be seriously injured or killed. A study of actual crashes by the University of Virginia’s Center for Applied Biomechanics found that women wearing seatbelts were 47 percent more likely to be seriously injured than males in similar accidents.

Time and again, we are reminded that women still struggle to receive equal pay and treatment in the workplace. Women haven't received equal treatment in the design of the cars they drive, either -- and it may have had deadly results.

Searching for the Limits of Innovation Speak

In a blog post at Forbes.com, Chunka Mui, author of the recently released book The New Killer Apps, offers up a friendly critique of my last post here on the job-killing potential of autonomous (or self-driving) cars and other cyberphysical systems, like drones. Mui's main point is that my warning about autonomous vehicles undervalues the human toll of automobiles, including deaths and injuries. He would rather see lives saved by adopting autonomous cars than see jobs saved by not adopting them.

In this response post, I want to say, first, that Mui misunderstands me. On the point of adopting autonomous cars as consumer goods, he and I are closer than he might imagine. In fact, I could easily see, within the next twenty years, becoming an advocate for mandating the use of self-driving systems on all cars for reasons of safety, emissions control, and fuel economy.

But, second, I also want to claim that there is a logical incoherence in Mui's larger argument and that, on a deeper level, it is his fondness for certain kinds of buzzwords, like killer apps and disrupting technologies, that has led him astray. Adopting autonomous cars as consumer goods does not necessarily, or inevitably, entail that we must also use the technology to kill jobs. We need to remember that.

Ultimately, however, Mui's overvaluing, if not outright worshipping, of killer apps and disruption and such is a symptom of his relationship to the notion of innovation. (Mui manages to use the word innovation and its near relatives eight times in the short piece.) So, in the third and final part of my response, I'll go one step further and argue that our culture has a real innovation speak problem. We need to find new ways of thinking and talking.

Read More

Autonomous Vehicles and the Labor Question

Last week, the New Yorker published an article on the rise of autonomous, or self-driving, vehicles written by Burkhard Bilger. My friends, who know I'm writing a book on auto history, filled my inbox with links to it. It's a great piece. It's well-written and snappy, and it avoids many of the cliches that haunt most writing on self-driving cars. I first learned about Bilger's article on Twitter from a Tweet that said something like "The Inevitable Rise of the Self-Driving Car," which got my hackles up in a severe way. Nothing is inevitable, except death. But Bilger doesn't make an inevitability argument (damn you, Twitter!). Indeed, he spends a good bit of time exploring the many things that could go wrong that would keep autonomous cars from being legal or from being widely adopted. (More on the role risk plays in technology adoption in a post on the Tesla battery fires coming here soon.)

Still, Bilger's article was narrow in a few unfortunate ways. 

Read More

The Auto Fiend as Icon

In my first post on this blog, I examined a forgotten automotive risk, the possibility of gas headlamp fires. If you want to understand how people tried to control dangers, I argued, you must understand their worries. I also believe that, very often, you must understand how historical actors, who were trying to decrease hazards, thought about other people. In an article I'm writing, I examine four successive efforts to regulate the automobile, from 1900 to 2010, and how each effort contained a different picture of the human being, or at least of human psychology.

During the automobile's first few decades, perhaps no image of risky automobile drivers dominated American culture as much as the Auto Fiend. But understanding who the Auto Fiend was and what he (and the Auto Fiend was always a he) meant requires some unpacking.

Read More

An Untold Irony in NHTSA's Recent Rearview Camera Debacle

A little over a month ago, news outlets reported on a controversy involving the National Highway Traffic Safety Administration (NHTSA). The news reports captured most of the essential facts: In 2008, the US Congress passed the Cameron Gulbransen Kids Transportation Safety Act. The law directed NHTSA, among other things, to revise its safety standards to "expand the required field" of rearward vision. The practical effect of this part of the law would be to require rearview cameras or similar technologies to be installed in every new car sold in the United States so that drivers would back over fewer children, who are often too short to be seen through traditional rearview mirrors.

The law ordered NHTSA to create the new rule by 2011. But NHTSA hasn't done it yet, meaning that the rule is more than two and a half years late. In September of this year, NHTSA announced not that it was going to (finally) pass the rule but, instead, that it would make rearview cameras a part of its New Car Assessment Program, which recommends safety technologies in the hopes that consumers will buy them voluntarily.

In response to this announcement, Greg Gulbransen (Cameron's father), Susan Auriemma, and four consumer advocacy groups (Kids and Cars Inc., Advocates for Highway and Auto Safety, Consumers Union, and Public Citizen) filed a lawsuit against NHTSA, arguing that the agency has failed to enforce the law. These are the bare facts, and as I said, news outlets did a good job of reporting them, but they missed a deeper irony, which I will reflect on here.   

Read More

Uncovering Forgotten Risks, Pt. 1: Fire

Last summer, I was a fellow at the Lemelson Center for the Study of Invention and Innovation, a part of the Smithsonian Institution's National Museum of American History. While I was there, I studied automotive risks—mostly focused on safety—from the period of 1900 to 1940. I was lucky to have the chance to explore the Smithsonian's off-site collection of antique cars during my stay. The collection sits in an anonymous-looking warehouse in suburban Maryland, and it houses everything from an original 1893 Duryea Brothers horseless carriage to a 1948 Tucker Sedan, from one of Mario Andretti's race cars to one of General Motors' infamous EV-1 electric cars (of Who Killed the Electric Car? fame).

I was particularly examining the evolution of safety technologies on early automobiles, and I mean safety technologies in the most basic sense, not airbags and crash avoidance systems, but headlights, brakes, mirrors, and other features that we might not even consider to be safety features any longer. They have become part of the basic notion of what a car is.  

Every once in a while, when doing this kind of research, you uncover some long forgotten hazard, something that may have worried people in earlier times but that would never occur to us today. While I was tracking the evolution of headlights—from gas to acetylene, from acetylene to electric—in the Smithsonian's antique car collection, I came across a 1901 Pierce Motorette.

Read More

Beginnings

This blog is about the history of the automobile and its relationship to regulatory control. I will mostly be using this space to examine sources and stories that I have discovered that will not play a central role in my book.