“I want tools . . . tools that kill!” my client told me. He was not a defense contractor or a representative of the U.S. military; he was a senior manager at a corporation that had nothing to do with producing weaponry. Fresh out of my doctoral program, all spiffed up in new shoes and sport coat, I had arrived for my first big consulting project at a Fortune 100 company. Confused, and with some trepidation, I asked him, “What do you mean, ‘tools that kill’?”

“I want,” he said, “techniques to make my engineers more creative and innovative. Can you deliver that?”

I was really a babe in the woods. Looking back, I recognize that his demand was my initiation into the tradition of management science. I was encountering two salient characteristics of that tradition: the free use of inflated and dramatic rhetoric—“tools that kill”—and the assumption that the way to solve problems is with a technical, by-the-numbers approach.

Since the early 20th century, management science has been on a quest for its version of the Holy Grail—that quick fix or formula that will deliver increased productivity and social harmony. Its history tells of the rise and fall of corporate training programs, each introduced with great fanfare, each pronouncing its novelty and uniqueness, and each claiming scientific legitimacy. From quality circles and reengineering to employee empowerment, these interventions have consistently been presented not only as aids to achieving certain ends but also as revolutions or new paradigms for the workplace.

The rise of corporate mindfulness programs can be seen as the most recent addition to this history. In fact, the very way corporate mindfulness is branded—as “scientific” and as a “revolution”—speaks less of the uniqueness or validity of those claims than of how completely characteristic they are of the field. With the dizzying pace of change, managers are stressed and given to feeling that they are falling behind the curve. They are now, and have long been, vulnerable to new trends that make big promises.

Frederick Winslow Taylor was perhaps the first management guru. In his 1911 book, The Principles of Scientific Management, Taylor described his proposal as a “mental attitude” to be adopted. The classic time and motion studies discussed in Taylor’s book deskilled work, dictating the “one best method” to organize production. Among other things, he was fond of proclaiming that scientific management could transform recalcitrant immigrant laborers into “first-class men,” who would cooperate with management. Not surprisingly, Taylor’s methods, with their enormous appeal, brought him fame and fortune.

But how good was the science of Taylor’s scientific management? Not very, as it turned out. Taylor never submitted his methods to the scrutiny of the scientific method. In fact, he was accused of fudging his data, lying to clients, and inflating his track record of success. Eventually he had to answer to a special House committee, and Congress subsequently banned his stopwatch methods in government facilities.

Having worked on Taylor-style assembly lines in my youth, I experienced the alienation of boring, repetitive, soul-destroying work firsthand. In my senior year at college, I read textbook accounts of the psychologist Elton Mayo and the famous “Hawthorne studies,” which he carried out at a factory outside Chicago. Productivity increases at the plant were attributed to a friendlier supervisory style that Mayo introduced to improve group morale. Textbooks of the time often juxtaposed Mayo and Taylor, depicting Mayo as a heroic figure who humanized the workplace and repaired the damage inflicted by Taylor’s scientific management. The human relations movement, which grew out of Mayo’s work, trained supervisors to involve workers in decision making, claiming this would reduce hostility and resistance toward management.

For many years I believed this narrative, which had largely replaced Taylor’s as the dominant one. It wasn’t until I examined the archival data from the Hawthorne studies that I discovered Mayo had omitted from his scientific reports any mention of workers’ grievances regarding their working conditions, pay, and fatigue. Such complaints—especially those coming from women—were summarily dismissed by Mayo as mere “emotional reactions” that were not to be taken seriously. Richard Gillespie, in his book Manufacturing Knowledge, notes “a persistent tendency in Mayo’s work to transform any challenge by workers of managerial control into evidence of psychiatric disturbance.” Throughout Mayo’s writings, workers were described as irrational, pathological, and lacking in self-control. But Mayo never produced evidence to support the scientific validity of these claims.

By the 1950s, nearly every major American company had mandated human relations training for supervisors and foremen. But many social scientists have since called Mayo’s scientific objectivity into question. As William Davies points out in The Happiness Industry, “there is some basis to doubt whether Mayo was really reporting on data acquired at Hawthorne or simply repackaging some theories that he’d long held about the future of capitalism.”

In the 1970s, the Quality of Work Life (QWL) movement swept through North America, Europe, and Australia, as social scientists became change agents, advocating with missionary zeal for the redesign of production systems to fit human needs and motivations. In Scandinavia this morphed into the industrial democracy movement, with companies like Volvo leading the way. Early in my professional career, I was myself deeply involved in the QWL movement as both a consultant and a researcher. Over time, however, I watched companies abandon their commitment to such programs. Managers’ unwillingness to share power with workers constantly frustrated me. Despite the fact that some QWL manufacturing plants were hugely successful, there was no widespread attempt to emulate them. The progressive managers who spearheaded these innovative plants were marginalized; championing QWL proved a career-limiting move that elicited derision, scorn, and even hostility from peers and superiors. Rather than embracing QWL innovations that improved the bottom line, corporate management either vigorously opposed them or simply allowed them to fade out.

In the next decade, Thomas (Tom) Peters and Robert Waterman became an overnight sensation with the publication of their international bestseller In Search of Excellence, which made its authors multimillionaires. The book examined high-performing companies to uncover the reasons for their success. A flamboyant, evangelical character, Peters was infamous for his tirades and rants, browbeating and publicly shaming managers to shed their risk-averse habits. Peters prescribed eight success “fundamentals,” such as “staying close to the customer,” “a bias for action,” and “stick to the knitting.” Twenty years later, in an interview with Fast Company magazine, Peters admitted to faking his data. Subsequent analyses found that there was really nothing special about many of the “great” companies in the book. A critique published in the Harvard Business Review in 2009 summed it up: “We’ve come to the rather disturbing conclusion that every one of the studies that we’ve investigated in detail is subject to a fundamental, irremediable flaw that leaves us with no good scientific reason to have any confidence in their findings.”

By the 1990s, corporations were embracing “reengineering,” a euphemism for ruthless downsizing and restructuring. Reengineering was essentially scientific management on steroids, tapping into the power of new information technology to streamline bloated corporations. Droves of freshly minted MBAs became reengineering consultants overnight, carrying forth to Fortune 500 companies such slogans as “Don’t automate, obliterate” and “Carry the wounded but shoot the stragglers.” In the aftermath of the reengineering craze, corporations looked like war zones, with the once sacrosanct white-collar class devastated. While such programs were immensely popular and profitable (representing a $51 billion industry by 1995), evaluation studies judged 67 percent of reengineering programs to have produced marginal, mediocre, or failed results.

Today, as companies report over $300 billion in annual losses due to stress-related absences, corporate mindfulness is being sold as a new form of “mental capital.” The tech magazine Wired captures this move by likening mindfulness to “the new caffeine, the fuel that allegedly unlocks productivity and creative bursts.” Just when neuroscientists are exuberantly celebrating the brain’s plasticity, corporate mindfulness programs are being promoted as revolutionary interventions that will help employees be more flexible, emotionally intelligent, and resilient in the face of stress.

Maybe it’s true. Maybe corporate mindfulness training is, finally, the true revolution in management, and it will deliver on its promises where its predecessors have failed. But for now, that is sheer speculation. What is not speculation is what history demonstrates: corporate managers have invested enormous amounts of time and effort in programs marketed as scientific and revolutionary, only to see them fade and fall out of favor.

The dubious claims made by each new trend in industrial psychology and management science amount to a persistent habit through time. If the Buddhist practice of mindfulness teaches us anything, it is that habits are hard to identify and break. Today, we should be extra vigilant when such claims come wrapped in triumphant oratory. We would do well, I think, to be mindful of the rhetoric of mindfulness.
