The military funding of science has had a powerful transformative effect on the practice and products of scientific research since the early 20th century. Particularly since World War I, advanced science-based technologies have been viewed as essential elements of a successful military.
World War I is often called "the chemists' war", both for the extensive use of poison gas and the importance of nitrates and advanced high explosives. Poison gas, beginning in 1915 with chlorine from the powerful German dye industry, was used extensively by the Germans and the British; over the course of the war, scientists on both sides raced to develop more and more potent chemicals and devise countermeasures against the newest enemy gases.[1] Physicists also contributed to the war effort, developing wireless communication technologies and sound-based methods of detecting U-boats, resulting in the first tenuous long-term connections between academic science and the military.[2]
World War II marked a massive increase in the military funding of science, particularly physics. In addition to the Manhattan Project and the resulting atomic bomb, British and American work on radar was widespread and ultimately highly influential in the course of the war; radar enabled detection of enemy ships and aircraft, as well as the radar-based proximity fuze. Mathematical cryptography, meteorology, and rocket science were also central to the war effort, with military-funded wartime advances having a significant long-term effect on each discipline. The technologies in use by the war's end—jet aircraft, radar and proximity fuzes, and the atomic bomb—were radically different from pre-war technology; military leaders came to view continued advances in technology as the critical element for success in future wars. The advent of the Cold War solidified the links between military institutions and academic science, particularly in the United States and the Soviet Union, so that even during a period of nominal peace military funding continued to expand. Funding spread to the social sciences as well as the natural sciences, and emerging fields such as digital computing were born of military patronage. Following the end of the Cold War and the dissolution of the Soviet Union, military funding of science has decreased substantially, but much of the American military-scientific complex remains in place.
The sheer scale of military funding for science since World War II has instigated a large body of historical literature analyzing the effects of that funding, especially for American science. Since Paul Forman's 1987 article "Behind quantum electronics: National security as a basis for physical research in the United States, 1940-1960," there has been an ongoing historical debate over precisely how and to what extent military funding affected the course of scientific research and discovery.[3] Forman and others have argued that military funding fundamentally redirected science—particularly physics—toward applied research, and that military technologies predominantly formed the basis for subsequent research even in areas of basic science; ultimately the very culture and ideals of science were colored by extensive collaboration between scientists and military planners. Daniel Kevles has presented an alternate view: while military funding provided many new opportunities for scientists and dramatically expanded the scope of physical research, scientists by and large retained their intellectual autonomy.
While there were numerous instances of military support for scientific work before the 20th century, these were typically isolated; knowledge gained from technology was generally far more important to the development of science than scientific knowledge was to technological innovation.[4] Thermodynamics, for example, is a science partly born from military technology: one of the many sources of the first law of thermodynamics was Count Rumford's observation of the heat produced by boring cannon barrels.[5] Mathematics was important in the development of the Greek catapult and other weapons,[6] and the analysis of ballistics was in turn important for the development of mathematics; Galileo promoted the telescope as a military instrument to the military-minded Republic of Venice before turning it to the skies and seeking the patronage of the Medici court in Florence.[7] In general, craft-based innovation, disconnected from the formal systems of science, was the key to military technology well into the 19th century.
Even craft-based military technologies were not generally produced by military funding. Instead, craftsmen and inventors developed weapons and military tools independently and actively sought the interest of military patrons afterward.[8] Following the rise of engineering as a profession in the 18th century, governments and military leaders did try to harness the methods of both science and engineering for more specific ends, but frequently without success. In the decades leading up to the French Revolution, French artillery officers were often trained as engineers, and military leaders from this mathematical tradition attempted to transform the process of weapons manufacture from a craft-based enterprise to an organized and standardized system based on engineering principles and interchangeable parts (pre-dating the work of Eli Whitney in the U.S.). During the Revolution, even natural scientists participated directly, attempting to create “weapons more powerful than any we possess” to aid the cause of the new French Republic, though there were no means for the revolutionary army to fund such work.[9] Each of these efforts, however, was ultimately unsuccessful in producing militarily useful results. A slightly different outcome came from the longitude prize of the 18th century, offered by the British government for an accurate method of determining a ship's longitude at sea (essential for the safe navigation of the powerful British navy): intended to promote—and financially reward—a scientific solution, it was instead won by a scientific outsider, the clockmaker John Harrison.[10] However, the naval utility of astronomy did help increase the number of capable astronomers and focus research on developing more powerful and versatile instruments.
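The value of Harrison's chronometer lay in simple arithmetic: the Earth rotates 15 degrees per hour, so comparing local solar time (taken from a noon sight) with the home-port time kept by the chronometer yields longitude directly. A minimal sketch of that calculation, with illustrative rather than historical figures:

```python
def longitude_from_times(local_solar_hour: float, chronometer_hour: float) -> float:
    """Longitude in degrees, positive east and negative west of the reference port.

    The Earth turns 360 degrees in 24 hours, i.e. 15 degrees per hour,
    so each hour of difference between local solar time and the
    reference time kept by the chronometer is 15 degrees of longitude.
    """
    return (local_solar_hour - chronometer_hour) * 15.0

# Example: local noon is observed while the chronometer, still set to
# home-port (Greenwich) time, reads 15:00 -- the ship is 45 degrees west.
print(longitude_from_times(12.0, 15.0))  # -45.0
```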
Through the 19th century, science and technology grew closer together, particularly through electrical and acoustic inventions and the corresponding mathematical theories. The late 19th and early 20th centuries witnessed a trend toward military mechanization, with the advent of repeating rifles with smokeless powder, long-range artillery, high explosives, machine guns, and mechanized transport along with telegraphic and later wireless battlefield communication. Still, independent inventors, scientists and engineers were largely responsible for these drastic changes in military technology (with the exception of the development of battleships, which could only have been created through organized large-scale effort).[11]
World War I marked the first large-scale mobilization of science for military purposes. Prior to the war, the American military ran a few small laboratories as well as the Bureau of Standards, but independent inventors and industrial firms predominated.[12] Similarly in Europe, military-directed scientific research and development was minimal. The powerful new technologies that led to trench warfare, however, reversed the traditional advantage of fast-moving offensive tactics; fortified positions supported by machine guns and artillery resulted in high attrition but strategic stalemate. Militaries turned to scientists and engineers for even newer technologies, but the introduction of tanks and aircraft had only a marginal impact; the use of poison gas made a tremendous psychological impact, but decisively favored neither side. The war ultimately turned on maintaining adequate supplies of materials, a problem also addressed by military-funded science—and, through the international chemical industry, closely related to the advent of chemical warfare.
The Germans introduced gas as a weapon in part because naval blockades limited their supply of nitrate for explosives, while the massive German dye industry could easily produce chlorine and organic chemicals in large amounts. Industrial capacity was completely mobilized for war, and Fritz Haber and other industrial scientists were eager to contribute to the German cause; soon they were closely integrated into the military hierarchy as they tested the most effective ways of producing and delivering weaponized chemicals. Though the initial impetus for gas warfare came from outside the military, further developments in chemical weapon technology might be considered military-funded, considering the blurring of lines between industry and nation in Germany.[13]
Following the first chlorine attack by the Germans in April 1915, the British quickly moved to recruit scientists to develop their own gas weapons. Gas research escalated on both sides, with chlorine followed by phosgene, a variety of tear gases, and mustard gas. A wide array of research was conducted on the physiological effects of other gases, such as hydrogen cyanide, arsenic compounds, and a host of complex organic chemicals. The British built from scratch what became an expansive research facility at Porton Down, which remains a significant military research institution into the 21st century. Unlike many earlier military-funded scientific ventures, the research at Porton Down did not stop when the war ended or an immediate goal was achieved. In fact, every effort was made to create an attractive research environment for top scientists, and chemical weapons development continued apace—though in secret—through the interwar years and into World War II. German military-backed gas warfare research did not resume until the Nazi era, following the 1936 discovery of tabun, the first nerve agent, through industrial insecticide research.
In the United States, the established tradition of engineering competed explicitly with the rising discipline of physics for World War I military largess. A host of inventors, led by Thomas Edison and his newly created Naval Consulting Board, cranked out thousands of inventions to solve military problems and aid the war effort, while academic scientists worked through the National Research Council (NRC) led by Robert Millikan. Submarine detection was the most important problem that both the physicists and the inventors hoped to solve, as German U-boats were decimating the crucial naval supply lines from the U.S. to England. Edison's Board produced very few useful innovations, but NRC research resulted in moderately successful sound-based methods for locating submarines and hidden ground-based artillery, as well as useful navigational and photographic equipment for aircraft. Because of the success of academic science in solving specific military problems, the NRC was retained after the war's end, though it gradually decoupled from the military.[14]
Many industrial and academic chemists and physicists came under military control during the Great War, but post-war research by the Royal Engineers Experimental Station at Porton Down and the continued operation of the National Research Council were exceptions to the overall pattern; wartime chemistry funding was a temporary redirection of a field largely driven by industry and later medicine, while physics grew closer to industry than to the military. The discipline of modern meteorology, however, was largely built from military funding. During World War I, the French civilian meteorological infrastructure was largely absorbed into the military. The introduction of military aircraft during the war as well as the role of wind and weather in the success or failure of gas attacks meant meteorological advice was in high demand. The French army (among others) created its own supplementary meteorological service as well, retraining scientists from other fields to staff it. At war's end, the military continued to control French meteorology, sending weathermen to French colonial interests and integrating weather service with the growing air corps; most of the early-twentieth century growth in European meteorology was the direct result of military funding.[15] World War II would result in a similar transformation of American meteorology, initiating a transition from an apprenticeship system for training weathermen (based on intimate knowledge of local trends and geography) to the university-based, science-intensive system that has predominated since.
If World War I was the chemists' war, World War II was the physicists' war. As with other total wars, it is difficult to draw a line between military funding and more spontaneous military-scientific collaboration during World War II. Well before the Invasion of Poland, nationalism was a powerful force in the German physics community (see Deutsche Physik); the military mobilization of physicists was all but irresistible after the rise of National Socialism. German and Allied investigations of the possibility of a nuclear bomb began in 1939 at the initiative of civilian scientists, but by 1942 the respective militaries were heavily involved. The German nuclear energy project had two independent teams: a civilian-controlled team under Werner Heisenberg and a military-controlled team led by Kurt Diebner; the latter was more explicitly aimed at producing a bomb (as opposed to a power reactor) and received much more funding from the Nazis, though neither was ultimately successful.[16]
In the U.S., the Manhattan Project and other projects of the Office of Scientific Research and Development resulted in a much more extensive military-scientific venture, the scale of which dwarfed previous military-funded research projects. Theoretical work by a number of British and American scientists resulted in significant optimism about the possibility of a nuclear chain reaction. As the physicists convinced military leaders of the potential of nuclear weapons, funding for actual development was ratcheted up rapidly. A number of large laboratories were created across the United States for work on different aspects of the bomb, while many existing facilities were reoriented to bomb-related work; some were university-managed while others were government-run, but all were ultimately funded and directed by the military.[17] The May 1945 surrender of Germany, the original intended target for the bomb, did virtually nothing to slow the project's momentum. After Japan's surrender immediately following the atomic bombings of Hiroshima and Nagasaki, many scientists returned to academia or industry, but the Manhattan Project infrastructure was too large—and too effective—to be dismantled wholesale; it became the model for future military-scientific work, in the U.S. and elsewhere.[18]
Other wartime physics research, particularly in rocketry and radar technology, was less significant in popular culture but much more significant for the outcome of the war. German rocketry was driven by the pursuit of Wunderwaffen, resulting in the V-2 ballistic missile; the technology as well as the personal expertise of the German rocketry community was absorbed by the U.S. and U.S.S.R. rocket programs after the war, forming the basis of long-term military-funded rocketry, ballistic missile, and later space research. Rocket science was only beginning to make an impact in the final years of the war: German rockets created fear and destruction in London but had only modest military significance, while air-to-ground rockets enhanced the power of American air strikes; jet aircraft also went into service by the end of the war.[19] Radar work before and during the war provided even more of an advantage for the Allies. British physicists pioneered long-wave radar, developing an effective system for detecting incoming German air forces. Work on potentially more precise short-wave radar was turned over to the U.S.; several thousand academic physicists and engineers not participating in the Manhattan Project did radar work, particularly at MIT and Stanford, resulting in microwave radar systems that could resolve more detail in incoming flight formations. Further refinement of microwave technology led to proximity fuzes, which greatly enhanced the ability of the U.S. Navy to defend against Japanese bombers. Microwave production, detection, and manipulation also formed the technical foundation that complemented the institutional foundation of the Manhattan Project in much post-war defense research.
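The edge microwave radar offered over long-wave systems follows from diffraction: an antenna's angular resolution scales roughly as wavelength over aperture. A minimal sketch of that scaling, using illustrative round numbers rather than figures from the historical systems:

```python
import math

def beamwidth_degrees(wavelength_m: float, aperture_m: float) -> float:
    """Approximate beamwidth of an antenna, in degrees.

    Uses the diffraction-limit estimate theta ~ wavelength / aperture
    (in radians), which governs how finely a radar can resolve targets.
    """
    return math.degrees(wavelength_m / aperture_m)

# Illustrative comparison for a 3 m antenna:
print(beamwidth_degrees(10.0, 3.0))  # ~191 deg: metre-wave radar, essentially no directivity
print(beamwidth_degrees(0.10, 3.0))  # ~1.9 deg: 10 cm microwave radar can resolve formations
```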
In the years immediately following World War II, the military was by far the most significant patron of university science research in the U.S., and the national labs also continued to flourish.[20] After two years in political limbo (but with work on nuclear power and bomb manufacture continuing apace), the Manhattan Project became a permanent arm of the government as the Atomic Energy Commission. The Navy—inspired by the success of military-directed wartime research—created its own R&D organization, the Office of Naval Research, which would preside over an expanded long-term research program at the Naval Research Laboratory as well as fund a variety of university-based research. Military money that followed up the wartime radar research led to explosive growth in both electronics research and electronics manufacturing.[21] The United States Air Force became a service branch independent of the Army and established its own research and development system, and the Army followed suit (though it was less invested in academic science than the Navy or Air Force). Meanwhile, the perceived communist menace of the Soviet Union caused tensions—and military budgets—to escalate rapidly.
The Department of Defense primarily funded what has been broadly described as “physical research,” but to reduce this to merely chemistry and physics is misleading. Military patronage benefited a large number of fields, and in fact helped create a number of the modern scientific disciplines. At Stanford and MIT, for example, electronics, aerospace engineering, nuclear physics, and materials science—all physics, broadly speaking—each developed in different directions, becoming increasingly independent of their parent disciplines as they grew and pursued defense-related research agendas. What began as interdepartmental laboratories became centers of graduate teaching and research innovation thanks to the broad scope of defense funding. The need to keep up with corporate technology research (which was receiving the lion's share of defense contracts) also prompted many science labs to establish close relationships with industry.[22]
The complex histories of computer science and computer engineering were shaped, in the first decades of digital computing, almost entirely by military funding. Most of the basic component technologies for digital computing emerged from the long-running Whirlwind-SAGE program to develop an automated radar shield. Virtually unlimited funds enabled two decades of research that only began producing useful technologies by the end of the 1950s; even the final version of the SAGE command and control system had only marginal military utility. More so than with previously established disciplines receiving military funding, the culture of computer science was permeated with a Cold War military perspective. Indirectly, the ideas of computer science also had a profound effect on psychology, cognitive science, and neuroscience through the mind-computer analogy.[23]
The history of earth science and the history of astrophysics were also closely tied to military purposes and funding throughout the Cold War. American geodesy, oceanography, and seismology grew from small sub-disciplines into full-fledged independent disciplines, as for several decades virtually all funding in these fields came from the Department of Defense. A central goal that tied these disciplines together (even while providing the means for intellectual independence) was the figure of the Earth, the model of the Earth's geography and gravitation that was essential for accurate ballistic missiles. In the 1960s, geodesy was the ostensible goal of the satellite program CORONA, while military reconnaissance was in fact the driving force. Even for geodetic data, new secrecy guidelines worked to restrict collaboration in a field that had formerly been fundamentally international; the figure of the Earth had geopolitical significance beyond questions of pure geoscience. Still, geodesists retained enough autonomy, and subverted secrecy limitations enough, to use the findings of their military research to overturn some of the fundamental theories of geodesy.[24] Like geodesy and satellite photography research, the advent of radio astronomy had a military purpose hidden beneath its official astrophysical research agenda. Quantum electronics permitted both revolutionary new methods of analyzing the universe and—using the same equipment and technology—the monitoring of Soviet electronic signals.[25]
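As an illustration of what the figure of the Earth quantifies, consider the flattening of a reference ellipsoid. The parameters below are the modern WGS 84 values, a descendant of this Cold War geodetic work rather than the historical datums those programs actually produced:

```python
# Illustrative only: WGS 84 reference-ellipsoid parameters (modern values).
EQUATORIAL_RADIUS_M = 6378137.0  # semi-major axis a
POLAR_RADIUS_M = 6356752.3       # semi-minor axis b

# Flattening f = (a - b) / a measures how far the Earth departs from a
# sphere; errors in parameters like this translate into targeting errors
# over intercontinental ballistic-missile ranges.
flattening = (EQUATORIAL_RADIUS_M - POLAR_RADIUS_M) / EQUATORIAL_RADIUS_M
print(f"f = {flattening:.9f} (about 1/{1/flattening:.2f})")  # ~1/298.26
```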
Military interest in (and funding of) seismology, meteorology, and oceanography was in some ways a result of the defense-related payoffs of physics and geodesy. The immediate goal of funding in these fields was to detect clandestine nuclear testing and track fallout radiation, a necessary precondition for treaties to limit the nuclear weapon technology earlier military research had created. In particular, the feasibility of monitoring underground nuclear explosions was crucial to whether a comprehensive test ban, rather than the more limited Partial Nuclear Test Ban Treaty, was achievable.[26] But the military-funded growth of these disciplines continued even when no pressing military goals were driving them; as with other natural sciences, the military also found value in having ‘scientists on tap' for unforeseen future R&D needs.[27]
The biological sciences were also affected by military funding, but, with the exception of nuclear physics-related medical and genetic research, largely indirectly. The most significant funding sources for basic research before the rise of the military-industrial-academic complex were philanthropic organizations such as the Rockefeller Foundation. After World War II (and to some extent before), the influx of new industrial and military funding opportunities for the physical sciences prompted philanthropies to divest from physics research—most early work in high-energy physics and biophysics had been the product of foundation grants—and refocus on biological and medical research.
The social sciences also found limited military support from the 1940s to the 1960s, but much defense-minded social science research could be—and was—pursued without extensive military funding. In the 1950s, social scientists tried to emulate the interdisciplinary organizational success of the physical sciences' Manhattan Project through the synthetic behavioral science movement.[28] Social scientists actively sought to promote their usefulness to the military, researching topics related to propaganda (put to use in Korea), decision making, the psychological and sociological causes and effects of communism, and a broad constellation of other topics of Cold War significance. By the 1960s, economists and political scientists offered up modernization theory for the cause of Cold War nation-building; modernization theory found a home in the military in the form of Project Camelot, a study of the process of revolution, as well as in the Kennedy administration's approach to the Vietnam War. Project Camelot was ultimately canceled because of the concerns it raised about scientific objectivity in the context of such a politicized research agenda; though the natural sciences did not yet face charges that military and political entanglement had corrupted their research, the social sciences did.[29]
Historian Paul Forman, in his seminal 1987 article, proposed that not only had military funding of science greatly expanded the scope and significance of American physics, it also initiated "a qualitative change in its purposes and character."[30] Historians of science were beginning to turn to the Cold War relationship between science and the military for detailed study, and Forman's “distortionist critique” (as Roger Geiger has described it) served to focus the ensuing debates.[31]
Forman and others (e.g., Robert Seidel, Stuart Leslie, and, for the history of the social sciences, Ron Robin) view the influx of military money and the focus on applied rather than basic research as having had, at least in part, a negative impact on the course of subsequent research. In turn, critics of the distortionist thesis, beginning with Daniel Kevles, deny that the military "seduced American physicists from, so to speak, a 'true basic physics'."[32] Kevles and Geiger instead weigh the effects of military funding against the counterfactual in which such funding was simply absent, rather than redirected to alternate scientific uses.[33]
Most recent scholarship has moved toward a tempered version of Forman's thesis, in which scientists retained significant autonomy despite the radical changes brought about by military funding.[34]
Original source: https://en.wikipedia.org/wiki/History_of_military_technology