There are several approaches to defining the substance and scope of technology policy.
According to the American scientist and policy advisor Lewis M. Branscomb, technology policy concerns the "public means for nurturing those capabilities and optimizing their applications in the service of national goals and interests".[1] Branscomb defines technology in this context as "the aggregation of capabilities, facilities, skills, knowledge, and organization required to successfully create a useful service or product".[1]
Other scholars differentiate between technology policy and science policy, suggesting that the former is about "the support, enhancement and development of technology", while the latter focuses on "the development of science and the training of scientists".[2] Rigas Arvanitis, at the Institut de recherche pour le développement in France, suggests that "science and technology policy covers all the public sector measures designed for the creation, funding, support and mobilisation of scientific and technological resources".[3]
Technology policy is a form of "active industrial policy"; it effectively argues, based on the empirical record of technological development across societies, industries, and time periods, that markets rarely decide industrial fortunes on their own, and that state intervention or support is required to overcome standard cases of market failure (which may include, for example, under-funding of research and development in highly competitive or complex markets).[4]
Technology policy may also be defined more broadly: Michael G. Pollitt, for example, offers a multidisciplinary approach, drawing on social science and humanities perspectives, to what constitutes "good" policy.[5]
Technology management at a policy or organisational level, viewed through the lens of complexity, involves the management of an inherently complex system. Systems that are "complex" have distinct properties that arise from the relationships among their parts, such as nonlinearity, emergence, spontaneous order, adaptation, and feedback loops, among others. According to Richard Cook of the Cognitive Technologies Laboratory at the University of Chicago, "Complex systems are intrinsically hazardous systems. All of the interesting systems (e.g. transportation, healthcare, power generation) are inherently and unavoidably hazardous by their own nature. The frequency of hazard exposure can sometimes be changed but the processes involved in the system are themselves intrinsically and irreducibly hazardous. It is the presence of these hazards that drives the creation of defenses against hazard that characterize these systems."[6] The success or failure of organisations or firms depends on the effective management of innovation through technology policy programmes.[7]
Technological determinism presumes that a society's technology drives the development of its social structure and cultural values.[8] The term is believed to have been coined by Thorstein Veblen (1857–1929), an American sociologist and economist. The most radical technological determinist in the United States in the 20th century was most likely Clarence Ayres, a follower of Thorstein Veblen and John Dewey. William Ogburn was also known for his radical technological determinism.
Viewed through the lens of science policy, public policy can directly affect the funding of capital equipment and of intellectual infrastructure for industrial research by providing tax incentives, direct funding, or indirect support to organizations that fund and conduct research. Vannevar Bush, director of the Office of Scientific Research and Development for the U.S. government, wrote in July 1945 that "Science is a proper concern of government".[9] Bush directed the forerunner of the National Science Foundation, and his writings directly inspired researchers to invent the hyperlink and the computer mouse. The DARPA initiative to support computing was the impetus for the Internet Protocol stack. In the same way that scientific consortiums like CERN for high-energy physics have a commitment to public knowledge, access to this public knowledge in physics led directly to CERN's sponsorship of the development of the World Wide Web and of standard Internet access for all.
The first major elaboration of a technological determinist view of socioeconomic development came from the German philosopher and economist Karl Marx, whose theoretical framework was grounded in the perspective that changes in technology, and specifically productive technology, are the primary influence on human social relations and organizational structure, and that social relations and cultural practices ultimately revolve around the technological and economic base of a given society. Marx's position has become embedded in contemporary society, where the idea that fast-changing technologies alter human lives is all-pervasive.[8] Although many authors attribute a technologically determined view of human history to Marx's insights, not all Marxists are technological determinists, and some authors question the extent to which Marx himself was a determinist. Furthermore, there are multiple forms of technological determinism.[10] On the subject of technology as a means to liberation or enslavement, David Cooper wrote, "people myopically impressed by the world as an object of beauty or worship die out. Those who are myopically impressed by it as a source of energy do not: they even prosper".[11]
Although technological determinists believe in the continuous innovation of technology, many scientists believe that this innovation should be slowed down.[12] For example, with artificial intelligence gaining prominence throughout society, scientists fear that its potential to match human cognitive skills could force many individuals out of jobs and even put the lives of innocent people in danger.[13] Most famously, scientist and entrepreneur Elon Musk has been very public about his concerns over the current progression of computing and AI; he believes that the fast rate at which artificial intelligence becomes smarter will place humanity in a vulnerable position, in which newly created AI algorithms could identify humans as expendable.[14] Although these views are extreme, Musk and many others remain cautious about the progression of artificial intelligence and other technological advances that could undermine human control and, rather than shaping societies as technological determinism suggests, destroy them.[14]
Technology policy takes an "evolutionary approach" to technical change, and thereby relates to evolutionary growth theory, developed by Luigi Pasinetti, J.S. Metcalfe, Pier Paolo Saviotti, Koen Frenken, and others, building on the early work of David Ricardo.[15][16] J.S. Metcalfe noted in 1995 that "much of the traditional economic theory of technology policy is concerned with so-called 'market failures' which prevent the attainment of Pareto equilibria by violating one or other of the conditions for perfect competition".[17]
In contrast to the evolutionary paradigm, classic political science treats technology as a static "black box". Similarly, neoclassical economics treats technology as a residual, or exogenous, factor used to explain otherwise inexplicable growth (for example, supply shocks that boost production and affect the equilibrium price level in an economy). In the United States, the creation of the U.S. Office of Science and Technology Policy responded to the need for policy approaches in which technologies were not all treated as identical, but were differentiated by their social and economic characteristics. Technology policy is distinct from science studies, but both have been influenced by Thomas Samuel Kuhn. Research in the technology policy domain recognizes the importance of, among others, Vannevar Bush, Moses Abramovitz, William J. Abernathy, and James M. Utterback.
Technology policy approaches science as the pursuit of verifiable or falsifiable hypotheses, whereas science studies takes a post-modern view in which science is not assumed to reach an objective reality. Technology policy is rarely post-modern. Its goal is the improvement of policy and organizations based on an evolutionary view, and understanding, of the underlying scientific and technological constraints on economic development, as well as their potential. For example, clean coal technology via carbon sequestration and the allocation of electromagnetic spectrum by auction are ideas that emerged from technology policy schools. The dominant design paradigm, developed by William J. Abernathy and James M. Utterback, is an idea with significant implications for innovation, market structure, and competitive dynamics both within and between nations, and it emerged from empirical research in technology management, a domain of technology policy.
In the United States, net neutrality has been widely discussed in politics; the idea is that corporations, governments, and internet providers should not discriminate against content on the internet.[18] The issue came to the fore in the early 2000s, when some internet providers such as Comcast and AT&T were restricting their customers from activities such as accessing virtual private networks (VPNs) and using Wi-Fi routers. The term "net neutrality" was coined by Tim Wu, a Columbia University law professor, who called for net neutrality laws out of concern that restricting certain internet access would greatly inhibit long-term innovation.[19] In 2005, the Federal Communications Commission (FCC), under the Bush administration, issued a policy statement that restricted providers from blocking users' access to legal content on the internet and allowed American citizens to freely connect their devices to whichever internet connections they desired.[19] Shortly after issuing the statement, the FCC began enforcing it when, in 2005, it found a North Carolina internet provider, Madison River, guilty of interrupting internet phone calls; the FCC fined the company and demanded that Madison River halt its unlawful actions.[20]
The policy statement's authority soon came into question when Comcast sued the FCC in 2008. A federal court found that the FCC did not have the legal power to enforce the 2005 policy statement when it attempted to stop Comcast from slowing its customers' connections to BitTorrent, which Comcast argued contributed greatly to piracy.[21] This did not greatly diminish the FCC's power, however: in 2009 it forced Apple and AT&T to stop restricting their customers from making Skype calls.[20] With the Comcast case looming, the FCC sought to restructure its rules so that they would hold up in court, and in 2010, under the Obama administration, it did just that.
However, Verizon then filed another lawsuit against the FCC under this new framework, and again a federal court found that, under Title II of the Communications Act, the FCC did not have the jurisdiction to regulate corporations that were not "common carriers".[22] To address this, then-FCC chair Tom Wheeler decided to classify broadband carriers, like Verizon, as "Title II carriers", enabling the agency to regulate them, which led to the passing of a new net neutrality order in 2015. Although the new order still drew lawsuits from many corporations, it finally held up in federal court when the court declared that the agency's new rules were in fact within the FCC's authority.[22]
Under the Trump administration, President Donald Trump appointed Ajit Pai as the new FCC chairman in January 2017, which led to the 2015 policy order being voted out in December 2017; under the new regulation, the rules of the 2015 order were dropped entirely, and broadband carriers were only required to publicly disclose how they manage their networks.[23] Supporters of the new regulation claim that reversing the former net neutrality policy gives networks and internet providers more incentive to innovate and improve their networks, by allowing them to charge large companies for internet usage and by introducing competition.[24] In October 2019, a federal appeals court ruled that the FCC's reversal of the 2015 policy order was in fact lawful.[25]
One way governments use technology policy to their benefit is through the mass surveillance of their citizens.[26] Nations around the globe use these technologies and policies to listen to people's phone calls, read emails and text messages, track citizens' locations via GPS, and more, claiming that doing so improves national security.[26]
However, some nations abuse their mass surveillance powers and inhibit the freedom of their citizens.[26] A few examples of nations currently employing mass surveillance are listed below:
| Country | Examples of surveillance |
|---|---|
| China | Internet surveillance, video surveillance |
| India | Telecommunications surveillance |
| Iran | Internet censorship |
| North Korea | Internet and information restrictions |
| United States | Telecommunications and internet surveillance |
With the prevalence of technology throughout the 2000s, its power in politics has raised concerns about the speed of technological change and the difficulty of regulating it.[38] During the 2016 U.S. presidential election, Neil Jenkins, a director in the Office of Cybersecurity and Communications at the Department of Homeland Security, revealed that Russian government actors had hacked into the Democratic National Committee's servers to steal information, including material concerning the Republican candidate Donald Trump.[39]
The Russian infiltrators did not stop there: new information showed that attackers had attempted to breach Illinois's election system by viewing the state's voter-registration database and stealing information on registered voters.[40] Additionally, Arizona received cyber-attacks, intended to install malware, from the same IP addresses that had been used in the earlier Illinois attacks. Not long after, Jenkins found that many other states had received attacks from these same IP addresses,[40] and the Senate Intelligence Committee later concluded that Russia had targeted every U.S. state.[41]
Given the breaches of many different election systems during the 2016 election, political figures nationwide have taken a firm stance against using electronic voting machines in order to avoid future interference. One organization leading the push toward paper voting in the U.S. is the Verified Voting Foundation; the foundation and its members believe that, in order to protect the safety of future U.S. elections, government officials must be connected with experts in the field of technology to ensure that unsecured and unreliable voting machines are not used in the electoral process.[42] One of its board members, Barbara Simons, has gone as far as to argue that voting machines should be banned from U.S. elections, as she and many of her colleagues agree that any data available online is subject to attack.[43]
Also during the 2016 election, the data firm Cambridge Analytica became heavily involved in the election of Donald Trump as the 45th president of the United States when the Trump campaign hired the firm to guide its data-collection process. Cambridge Analytica managed to scrape data detailing the personal information of over 50 million users.[44] The data originated with Aleksandr Kogan, a former psychology professor at the University of Cambridge, who supplied it to Cambridge Analytica using a data-extraction technique employed at the university, in which users filled out a personality survey and downloaded an app.[44]
With this data, the company created personality profiles of the users and mapped trends in their likes and friends in order to direct targeted ads toward them.[45] Considering that 62% of adults receive their news through social networks like Facebook,[46] Cambridge Analytica may have influenced the result of the election, which leaves many wondering what role big data should have in the electoral process. Due to the influence that big data had in this election, calls to limit access to it and its usage have sparked a movement toward policy restricting companies' access to data, dubbed the "Great Privacy Awakening".[47] In June 2018, California enacted the California Consumer Privacy Act, which requires companies to disclose what sort of data they collect and to grant users the option to have their data deleted.[48] This leaves the rest of the U.S. watching to see how effective the California law is, in hopes of further protecting U.S. citizens from falling victim to more unethical data practices.
Many technological interventions in the everyday lives of citizens[49] are raising concerns about the future of regulation.
Self-driving cars have grabbed the attention of many, including the rideshare company Uber; in March 2018, the company tested an AI-driven vehicle in Tempe, Arizona, and during this test the vehicle struck and killed a 49-year-old woman.[50]
In this test, the self-driving vehicle was monitored by an Uber employee whom the company deemed a "watchdog".[51] It was later revealed that the accident was due to an issue with the programming of the vehicle's AI: the company had failed to create code capable of detecting jaywalkers. Rather than classifying the jaywalking pedestrian as a human, the code labelled the woman as "other", a category for which it had no response protocol; only 1.2 seconds before impact did the code detect a bicycle and alert the vehicle to brake, by which point it was too late to avoid the accident.[51]
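The failure described above amounts, in essence, to a missing default branch in the perception-to-planning logic: object classes without an explicit handling rule produced no protective response. The sketch below is purely illustrative and is not Uber's actual software; the class names, threshold, and actions are hypothetical, and it only shows how a conservative fallback for unrecognized object types can still yield a braking response.

```python
# Illustrative sketch only: hypothetical class names, threshold, and actions.
# Not the actual Uber perception or planning code.

RESPONSE_PROTOCOLS = {
    "pedestrian": "emergency_brake",
    "cyclist": "emergency_brake",
    "vehicle": "slow_and_yield",
}

def plan_response(detected_class: str, time_to_impact_s: float) -> str:
    """Return an action for an object detected on the vehicle's path."""
    if time_to_impact_s >= 4.0:
        return "monitor"  # no imminent conflict at this hypothetical threshold
    # Conservative fallback: a class with no explicit rule (e.g. "other")
    # defaults to an emergency brake rather than to no response at all.
    return RESPONSE_PROTOCOLS.get(detected_class, "emergency_brake")

print(plan_response("other", 3.0))    # -> "emergency_brake"
print(plan_response("vehicle", 2.0))  # -> "slow_and_yield"
```

The design point is simply that unhandled classifications should map to the most protective action available, rather than to inaction.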
An investigation by the National Transportation Safety Board (NTSB) later determined that the Uber "watchdog" had been distracted by their mobile device;[52] this news prompted calls for the U.S. government to create policy protecting citizens from further incidents. As a result, the NTSB released new requirements that companies testing autonomous vehicles on public roads have their safety procedures thoroughly inspected and documented, subject to regulatory confirmation.[52]
Another emerging technology that has captivated individuals worldwide is the civil use of drones. These drones are aerial vehicles controlled from a secondary device, such as a remote control or cell phone, and are commonly equipped with a camera that uploads video to the user's device in real time, which has raised concerns about both safety and privacy. Many believe that these drones intrude on individuals' Fourth Amendment right to privacy, while others believe that drones pose a threat of collision with other aircraft.[53] In response to such concerns, in December 2015 the Federal Aviation Administration (FAA) created rules stating that owners of civil drones must register them with the FAA, while individual states have enforced stricter laws restricting drones from certain public areas.[53]
This innovation has also attracted the attention of corporations, like Amazon, wishing to improve their operations; in a proposed plan to commercialize drone delivery, the company has created prototypes of Amazon Prime Air drones built to deliver packages to customers in 30 minutes or less.[54] The vision of hundreds of AI-driven drones flying freely to households nationwide has raised privacy concerns among many opponents of such innovations, including Marc Rotenberg, the president of the Electronic Privacy Information Center.[55]
With these concerns in mind, in June 2016 the FAA released federal policy that made using drones much easier: companies would be able to fly drones under 55 pounds provided they were operated by a person at least 16 years old, flown below 400 feet, and kept at least 5 miles away from an airport.[55] Although companies could use such drones, the FAA stopped short of allowing drones to be used for commercial package delivery because of the requirement that a drone remain within sight of its operator.[55]
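As a rough illustration of how the criteria just described combine, the following sketch (using hypothetical field and function names; the actual FAA rules contain further conditions not modeled here, such as the line-of-sight requirement) checks whether a proposed flight satisfies the weight, age, altitude, and airport-distance limits mentioned above.

```python
# Illustrative only: hypothetical field names; the actual FAA rules
# include additional conditions (e.g. visual line of sight) not shown here.

from dataclasses import dataclass

@dataclass
class DroneFlight:
    drone_weight_lb: float
    operator_age: int
    altitude_ft: float
    distance_to_airport_mi: float

def meets_2016_limits(flight: DroneFlight) -> bool:
    """Check the weight, age, altitude, and airport-distance limits
    described in the text for the June 2016 FAA policy."""
    return (
        flight.drone_weight_lb < 55
        and flight.operator_age >= 16
        and flight.altitude_ft < 400
        and flight.distance_to_airport_mi >= 5
    )

# Example: a 10 lb drone flown at 200 ft, 8 miles from an airport,
# by an 18-year-old operator satisfies these particular limits.
print(meets_2016_limits(DroneFlight(10, 18, 200, 8)))  # -> True
```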
The study of technology policy, technology management, or engineering and policy is taught at multiple universities.