Telecommunication is the transmission of signals over a distance for the purpose of communication. In modern times, this process typically involves the sending of electromagnetic waves by electronic transmitters, but in earlier years it may have involved the use of smoke signals, drums or semaphore. Today, telecommunication is widespread and devices that assist the process, such as the television, radio and telephone, are common in many parts of the world. There are also many networks that connect these devices, including computer networks, public telephone networks, radio networks and television networks. Computer communication across the Internet is one of many examples of telecommunication.
Telecommunication systems are generally designed by telecommunication engineers. Early inventors in the field include Alexander Graham Bell, Guglielmo Marconi, and John Logie Baird. Telecommunication is an important part of the world economy; this industry's revenue has been placed at just under 3 percent of the gross world product.
Etymology
The word telecommunication was adapted from the French word télécommunication. It is a compound of the Greek prefix tele- (τηλε-), meaning 'far off,' and the Latin communicare, meaning 'to share.'[1]
Basic elements
Each telecommunication system consists of three basic elements: a transmitter that takes information and converts it to a signal, a transmission medium over which the signal is carried, and a receiver that receives the signal and converts it back into usable information.
For example, consider a radio broadcast: In this case the broadcast tower is the transmitter, the radio is the receiver and the transmission medium is free space.
Each of the elements of the telecommunications system processes or carries an information-bearing signal. Each of the elements contributes undesired noise, so one of the figures of merit of a telecommunications system is its signal-to-noise ratio.
Often telecommunication systems are two-way, and a single device acts as both transmitter and receiver, or transceiver. For example, a mobile phone is a transceiver. Telecommunication over a phone line is called point-to-point communication because it takes place between one transmitter and one receiver. Telecommunication through radio broadcasts is called broadcast communication because it takes place between one powerful transmitter and numerous receivers.[2]
Analog or digital
Signals can be either analog or digital. In an analog signal, the signal varies continuously with respect to the information. In a digital signal, the information is encoded as a set of discrete values (for example, 1s and 0s). During transmission, the information contained in analog signals is inevitably degraded by noise. Conversely, unless the noise exceeds a certain threshold, the information contained in digital signals remains intact. This noise resistance represents a key advantage of digital signals over analog signals.[3]
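As a minimal sketch of one step in digitization (all values illustrative), quantization rounds continuous sample amplitudes to a fixed set of discrete levels; once the information is discrete, small noise no longer changes what is stored:

```python
# Quantize analog samples in the range [0, 1] to 4 discrete levels (2 bits).
# The sample values below are arbitrary illustrative numbers.
analog_samples = [0.12, 0.49, 0.87, 0.51]
levels = 4

digital = [round(s * (levels - 1)) for s in analog_samples]
print(digital)  # [0, 1, 3, 2]
```

After this step, a perturbation smaller than half the spacing between levels leaves the stored values unchanged, which is the threshold behavior described above.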
Networks
A collection of transmitters, receivers or transceivers that communicate with each other is known as a network. Digital networks may consist of one or more routers that route data to the correct user. An analogue network may consist of one or more switches that establish a connection between two or more users. For both types of network, a repeater may be necessary to amplify or recreate the signal when it is being transmitted over long distances. This is to combat attenuation that can render the signal indistinguishable from noise.[4]
Channels
A channel is a division of a transmission medium that allows it to carry multiple independent streams of data. For example, one radio station may broadcast at 96 MHz while another broadcasts at 94.5 MHz. In this case the medium has been divided by frequency, and each channel receives a separate frequency on which to broadcast. Alternatively, each channel could be allocated a recurring segment of time over which to broadcast.[4]
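The time-allocation alternative can be sketched as simple round-robin time-division multiplexing (the channel data below is illustrative): each channel takes a recurring slot on the shared medium, and the receiver recovers a channel by reading every Nth slot.

```python
# Two channels share one medium by alternating time slots (illustrative data).
channel_a = ["a1", "a2", "a3"]
channel_b = ["b1", "b2", "b3"]

# Interleave: a1, b1, a2, b2, ...
medium = [slot for pair in zip(channel_a, channel_b) for slot in pair]
print(medium)  # ['a1', 'b1', 'a2', 'b2', 'a3', 'b3']

# The receiver recovers each channel by taking every second slot.
recovered_a = medium[0::2]
recovered_b = medium[1::2]
```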
The above usage of channel refers to analog communications. In digital communications, a time slot in a sequence of bits is a traditional time-division multiplexing channel. A more complex approach, statistical multiplexing, precedes each piece of information with a channel identifier, so bandwidth need not be allocated to silent channels. Modern packet switching, as in X.25 or the Internet Protocol (IP), is a more generalized form of statistical digital multiplexing.
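The channel-identifier idea behind statistical multiplexing can be sketched as follows; the frame format here (one identifier byte, one length byte, then the payload) is hypothetical, chosen only to make the mechanism concrete:

```python
# Statistical multiplexing sketch: each frame carries its channel id, so only
# channels with data to send consume bandwidth on the shared link.
frames = [(1, b"voice"), (3, b"web"), (1, b"voice2")]  # (channel id, payload)

# Multiplex onto the shared link: [id, length, payload] per frame.
link = bytearray()
for channel, payload in frames:
    link += bytes([channel, len(payload)]) + payload

# Demultiplex: walk the link and route each payload by its channel id.
received = {}
i = 0
while i < len(link):
    channel, length = link[i], link[i + 1]
    received.setdefault(channel, []).append(bytes(link[i + 2 : i + 2 + length]))
    i += 2 + length

print(received)  # {1: [b'voice', b'voice2'], 3: [b'web']}
```

A silent channel simply contributes no frames, unlike fixed time-division multiplexing, where its slots would go unused.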
Modulation
The shaping of a signal to convey information is known as modulation. Modulation is a key concept in telecommunications and is frequently used to impose the information of one signal on another. Modulation is used to represent a digital message as an analog waveform. This is known as keying, and several keying techniques exist—these include phase-shift keying, frequency-shift keying, amplitude-shift keying and minimum-shift keying. Bluetooth, for example, uses phase-shift keying for exchanges between devices.[5]
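As a minimal sketch of phase-shift keying in its binary form (bit pattern and sample counts are illustrative), each bit selects the phase of the carrier: 0 radians for a 1, and pi radians for a 0.

```python
import math

# Binary phase-shift keying sketch: each bit sets the carrier phase.
bits = [1, 0, 1, 1]          # illustrative digital message
samples_per_bit = 8          # one carrier cycle per bit

waveform = []
for bit in bits:
    phase = 0.0 if bit == 1 else math.pi   # 1 -> 0 rad, 0 -> 180 degrees
    for n in range(samples_per_bit):
        waveform.append(math.cos(2 * math.pi * n / samples_per_bit + phase))
```

A receiver recovers each bit by comparing the received phase against the known carrier, which is why the scheme is robust as long as phase can be tracked.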
More relevant to the earlier discussion, however, modulation is also used to boost the frequency of analog signals. A raw signal is often unsuitable for transmission over long distances of free space because of its low frequency, so its information must be superimposed on a higher-frequency signal (known as a carrier wave) before transmission. Several modulation schemes are available to achieve this, the most basic being amplitude modulation and frequency modulation. An example of this process is a DJ's voice being superimposed on a 96 MHz carrier wave using frequency modulation (the voice would then be received on a radio as the channel "96 FM").[6]
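A sketch of that frequency-modulation process, with all numeric parameters illustrative (the 75 kHz deviation is typical of broadcast FM, and the 1 kHz tone stands in for the DJ's voice): the instantaneous frequency of the carrier shifts in proportion to the message amplitude, and the transmitted phase is the running integral of that frequency.

```python
import math

carrier_hz = 96e6        # the "96 FM" carrier from the example above
deviation_hz = 75e3      # peak frequency deviation (typical for broadcast FM)
sample_rate = 250e6      # must exceed twice the carrier frequency

def message(t):
    """A 1 kHz test tone standing in for the voice signal."""
    return math.sin(2 * math.pi * 1e3 * t)

phase = 0.0
samples = []
for n in range(1000):
    t = n / sample_rate
    inst_freq = carrier_hz + deviation_hz * message(t)  # frequency tracks the message
    phase += 2 * math.pi * inst_freq / sample_rate      # integrate frequency into phase
    samples.append(math.cos(phase))
```

The receiver reverses the process with a demodulator that measures the instantaneous frequency and subtracts the carrier, leaving the original message.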
Telecommunication is an important part of many modern societies. In 2006, estimates place the telecommunication industry's revenue at $1.2 trillion or just under three percent of the gross world product.[7] Good telecommunication infrastructure is widely acknowledged as important for economic success in the modern world on both the micro- and macroeconomic scale.
On the microeconomic scale, companies have used telecommunication to help build global empires. This is self-evident in the business of online retailer Amazon.com, but even the conventional retailer Wal-Mart has benefited from superior telecommunication infrastructure compared to its competitors.[8] In modern Western society, home owners often use their telephone to organize many home services, ranging from pizza deliveries to electricians. Even relatively poor communities have been noted to use telecommunication to their advantage. In Bangladesh's Narshingdi district, isolated villagers use cell phones to speak directly to wholesalers and arrange a better price for their goods. In Côte d'Ivoire, coffee growers share mobile phones to follow hourly variations in coffee prices and sell at the best price.[9] On the macroeconomic scale, Lars-Hendrik Röller and Leonard Waverman suggested a causal link between good telecommunication infrastructure and economic growth in 2001.[10] Few dispute the existence of a correlation, although some argue it is wrong to view the relationship as causal.[11]
Due to the economic benefits of good telecommunication infrastructure there is increasing worry about the digital divide. This stems from the fact that the world's population does not have equal access to telecommunication systems. A 2003 survey by the International Telecommunication Union revealed that roughly one-third of countries have less than one mobile subscription for every 20 people and one-third of countries have less than one fixed line subscription for every 20 people. In terms of internet access, roughly half of countries have less than one in 20 people with internet access. From this information, as well as educational data, the ITU was able to compile a Digital Access Index[12] that measures the overall ability of citizens to access and use information and communication technologies. Using this measure, countries such as Sweden, Denmark and Iceland receive the highest ranking while African countries such as Niger, Burkina Faso and Mali receive the lowest.[13]
Early forms of telecommunication include smoke signals and drums. Drums were used by natives in Africa, New Guinea and South America whereas smoke signals were used by natives in North America and China. Contrary to what one might think, these systems were often used to do more than merely announce the presence of a camp.[14][15]
In 1792, a French engineer, Claude Chappe, built the first fixed visual telegraphy (or semaphore) system between Lille and Paris.[16] However semaphore as a communication system suffered from the need for skilled operators and expensive towers at intervals of ten to thirty kilometers (six to nineteen miles). As a result of competition from the electrical telegraph, the last commercial line was abandoned in 1880.[17]
The first commercial electrical telegraph was constructed by Sir Charles Wheatstone and Sir William Fothergill Cooke and opened on April 9, 1839. Both Wheatstone and Cooke viewed their device as "an improvement to the [existing] electromagnetic telegraph" not as a new device.[18]
Samuel Morse independently developed a version of the electrical telegraph that he unsuccessfully demonstrated on September 2, 1837. His code was an important advance over Wheatstone's signaling method. The first transatlantic telegraph cable was successfully completed on July 27, 1866, allowing transatlantic telecommunication for the first time.[19]
The conventional telephone was independently invented by Alexander Graham Bell and by Elisha Gray in 1876.[20] Antonio Meucci had invented a device that allowed the electrical transmission of voice over a line as early as 1849, but his device was of little practical value because it relied upon the electrophonic effect, requiring users to place the receiver in their mouths to "hear" what was being said. The first commercial telephone services were set up in 1878 and 1879 on both sides of the Atlantic, in the cities of New Haven, Connecticut, and London.[21][22]
In 1832 James Lindsay gave a classroom demonstration of wireless telegraphy to his students. By 1854 he was able to demonstrate a transmission across the Firth of Tay from Dundee, Scotland to Woodhaven, a distance of two miles, using water as the transmission medium.[23] In December 1901, Guglielmo Marconi established wireless communication between St. John's, Newfoundland (Canada) and Poldhu, Cornwall (England), earning him the Nobel Prize in physics in 1909 (which he shared with Karl Braun).[24] However, small-scale radio communication had already been demonstrated in 1893 by Nikola Tesla in a presentation to the National Electric Light Association.[25]
On March 25, 1925, John Logie Baird was able to demonstrate the transmission of moving pictures at the London department store Selfridges. Baird's device relied upon the Nipkow disk and thus became known as the mechanical television. It formed the basis of experimental broadcasts done by the British Broadcasting Corporation beginning September 30, 1929.[26] However, for most of the twentieth century, televisions depended upon the cathode ray tube invented by Karl Braun. The first version of such a television to show promise was produced by Philo Farnsworth and demonstrated to his family on September 7, 1927. [27]
On September 11, 1940, George Stibitz was able to transmit problems using teletype to his complex number calculator in New York and receive the computed results back at Dartmouth College in New Hampshire.[28] This configuration of a centralized computer or mainframe with remote dumb terminals remained popular throughout the 1950s. However it was not until the 1960s that researchers started to investigate packet switching—a technology that would allow chunks of data to be sent to different computers without first passing through a centralized mainframe. A four-node network emerged on December 5, 1969; this network would become ARPANET, which by 1981 would consist of 213 nodes.[29]
ARPANET's development centered on the Request for Comment process and on April 7, 1969, RFC 1 was published. This process is important because ARPANET would eventually merge with other networks to form the internet and many of the protocols the internet relies upon today were specified through this process. In September 1981, RFC 791 introduced the Internet Protocol v4 (IPv4) and RFC 793 introduced the Transmission Control Protocol (TCP)—thus creating the TCP/IP protocol that much of the internet relies upon today.
However not all important developments were made through the Request for Comment process. Two popular link protocols for local area networks (LANs) also appeared in the 1970s. A patent for the token ring protocol was filed by Olof Soderblom on October 29, 1974.[30] And a paper on the Ethernet protocol was published by Robert Metcalfe and David Boggs in the July 1976 issue of Communications of the ACM.[31] These protocols are discussed in more detail in the next section.
In a conventional wire telephone system, the caller is connected to the person he wants to talk to by the switches at various exchanges. The switches form an electrical connection between the two users, and the setting of these switches is determined electronically when the caller dials the number. Once the connection is made, the caller's voice is transformed into an electrical signal using a small microphone in the caller's handset. This electrical signal is then sent through the network to the user at the other end, where it is transformed back into sound by a small speaker in that person's handset. This electrical connection works both ways, allowing the users to converse.[32] The fixed-line telephones in most residential homes are analog—that is, the speaker's voice wave directly determines the signal's voltage. Although short-distance calls may be handled from end to end as analog signals, telephone service providers usually convert the signals transparently to digital for switching and transmission before converting them back to analog for reception. The advantage of this is that digitized voice data can travel more cheaply, side by side with data from the internet, and can be perfectly reproduced in long-distance communication, as opposed to analog signals, which are inevitably degraded by noise.
Mobile phones have had a significant impact on telephone networks. Mobile phone subscriptions now outnumber fixed-line subscriptions in many markets. Sales of mobile phones in 2005 totaled 816.6 million, with that figure almost equally shared among the markets of Asia/Pacific (204 million), Western Europe (164 million), CEMEA (Central Europe, the Middle East and Africa) (153.5 million), North America (148 million) and Latin America (102 million).[33] In terms of new subscriptions over the five years from 1999, Africa has outpaced other markets with 58.2 percent growth.[34] Increasingly these phones are being serviced by systems in which the voice content is transmitted digitally, such as GSM or W-CDMA, with many markets choosing to deprecate analog systems such as AMPS.[35]
There have also been dramatic changes in telephone communication behind the scenes. Starting with the operation of TAT-8 in 1988, the 1990s saw the widespread adoption of systems based upon optic fibers. The benefit of communicating with optic fibers is that they offer a drastic increase in data capacity. TAT-8 itself was able to carry ten times as many telephone calls as the last copper cable laid at that time, and today's optic fiber cables are able to carry 25 times as many telephone calls as TAT-8.[22] This drastic increase in data capacity is due to several factors. First, optic fibers are physically much smaller than competing technologies. Second, they do not suffer from crosstalk, which means several hundred of them can be easily bundled together in a single cable.[36] Lastly, improvements in multiplexing have led to an exponential growth in the data capacity of a single fiber.[37][38]
Assisting communication across these networks is a protocol known as Asynchronous Transfer Mode (ATM) that allows the side-by-side data transmission mentioned in the first paragraph. The importance of the ATM protocol is chiefly in its notion of establishing pathways for data through the network and associating a traffic contract with these pathways. The traffic contract is essentially an agreement between the client and the network about how the network is to handle the data; if the network cannot meet the conditions of the traffic contract, it does not accept the connection. This is important because telephone calls can negotiate a contract so as to guarantee themselves a constant bit rate, something that will ensure a caller's voice is not delayed in parts or cut off completely.[39] There are competitors to ATM, such as Multiprotocol Label Switching (MPLS), that perform a similar task and are expected to supplant ATM in the future.[40]
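The admission decision behind a traffic contract can be sketched as follows; the capacity and rates are hypothetical numbers chosen for illustration, not values from any ATM specification:

```python
# Admission-control sketch: accept a connection only if its contracted
# constant bit rate can still be honored on the shared link.
link_capacity_kbps = 1000    # illustrative total link capacity
admitted_kbps = 0            # bandwidth already promised to accepted connections

def admit(requested_kbps):
    """Return True and reserve bandwidth if the contract can be met."""
    global admitted_kbps
    if admitted_kbps + requested_kbps > link_capacity_kbps:
        return False         # network cannot meet the contract: reject
    admitted_kbps += requested_kbps
    return True

call_accepted = admit(64)    # a 64 kbps voice call fits
bulk_accepted = admit(950)   # would exceed remaining capacity
print(call_accepted, bulk_accepted)  # True False
```

Rejecting connections up front, rather than degrading all of them, is what lets an accepted call keep its guaranteed bit rate.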
In a broadcast system a central high-powered broadcast tower transmits a high-frequency electromagnetic wave to numerous low-powered receivers. The high-frequency wave sent by the tower is modulated with a signal containing visual or audio information. The antenna of the receiver is then tuned so as to pick up the high-frequency wave and a demodulator is used to retrieve the signal containing the visual or audio information. The broadcast signal can be either analogue (signal is varied continuously with respect to the information) or digital (information is encoded as a set of discrete values).[41][42]
The broadcast media industry is at a critical turning point in its development, with many countries moving from analog to digital broadcasts. This move is made possible by the production of cheaper, faster and more capable integrated circuits. The chief advantage of digital broadcasts is that they eliminate a number of the problems of traditional analog broadcasts. For television, this includes the elimination of problems such as "snowy" pictures, ghosting and other distortion. These occur because of the nature of analog transmission, in which perturbations due to noise are evident in the final output. Digital transmission overcomes this problem because digital signals are reduced to binary data upon reception, so small perturbations do not affect the final output. In a simplified example, if a binary message 1011 was transmitted with signal amplitudes [1.0 0.0 1.0 1.0] and received with signal amplitudes [0.9 0.2 1.1 0.9], it would still decode to the binary message 1011—a perfect reproduction of what was sent. This example also illustrates a weakness of digital transmission: if the noise is great enough, it can significantly alter the decoded message. Using forward error correction, a receiver can correct a handful of bit errors in the resulting message, but too much noise will lead to incomprehensible output and hence a breakdown of the transmission.[43]
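The 1011 example above amounts to hard-decision decoding: each received amplitude is compared against a threshold midway between the two signal levels. A sketch of that step:

```python
# Hard-decision decoding sketch: threshold noisy amplitudes at 0.5 to recover
# the binary message 1011 from the example above.
received = [0.9, 0.2, 1.1, 0.9]   # noisy amplitudes for the transmitted 1011
decoded = [1 if a >= 0.5 else 0 for a in received]
print(decoded)  # [1, 0, 1, 1] — a perfect reproduction despite the noise
```

If noise pushed an amplitude across the 0.5 threshold (say, 0.2 becoming 0.6), the decoded bit would flip, which is exactly the failure mode that forward error correction exists to repair.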
In digital television broadcasting, there are three competing standards that are likely to be adopted worldwide. These are the ATSC, DVB and ISDB standards and the adoption of these standards thus far is presented in the captioned map. All three standards use MPEG-2 for video compression. ATSC uses Dolby Digital AC-3 for audio compression, ISDB uses Advanced Audio Coding (MPEG-2 Part 7) and DVB has no standard for audio compression but typically uses MPEG-1 Part 3 Layer 2.[44] The choice of modulation also varies between the schemes.
In digital audio broadcasting, standards are much more unified, with practically all countries choosing to adopt the Digital Audio Broadcasting standard (also known as the Eureka 147 standard). The exception is the United States, which has chosen to adopt HD Radio. HD Radio, unlike Eureka 147, is based upon a transmission method known as in-band on-channel transmission, which allows digital information to "piggyback" on normal AM or FM analog transmissions. This avoids the bandwidth allocation issues of Eureka 147 and was therefore strongly advocated by the National Association of Broadcasters, who felt there was a lack of new spectrum to allocate for the Eureka 147 standard. In terms of audio compression, DAB, like DVB, can use a variety of codecs but typically uses MPEG-1 Part 3 Layer 2, while HD Radio uses High-Definition Coding.
However, despite the pending switch to digital, analog receivers still remain widespread. Analog television is still transmitted in practically all countries. The United States had hoped to end analog broadcasts by December 31, 2006, however this was pushed back to February 17, 2009.[45] For analog, there are three standards in use. These are known as PAL, NTSC and SECAM.
For analog radio, the switch to digital is made more difficult by the fact that analog receivers cost a fraction of the cost of digital receivers: while a good analog receiver can be had for under US$20, a digital receiver costs at least US$75. The choice of modulation for analog radio is typically between amplitude modulation (AM) and frequency modulation (FM). To achieve stereo playback, an amplitude-modulated subcarrier is used for stereo FM, and quadrature amplitude modulation is used for stereo AM (C-QUAM).
The Internet is a worldwide network of computers that mostly operates over the public switched telephone network. Any computer on the Internet has a unique IP address that can be used by other computers to route information to it. Hence any computer on the Internet can communicate with any other computer and the Internet can therefore be viewed as an exchange of messages between computers.[46] An estimated 16.9 percent of the world population has access to the Internet with the highest participation (measured as percent of population) in North America (69.7 percent), Oceania/Australia (53.5 percent) and Europe (38.9 percent).[47] In terms of broadband access, countries such as Iceland (26.7 percent), South Korea (25.4 percent) and the Netherlands (25.3 percent) lead the world.[48]
The Internet works in part because of protocols that govern how the computers and routers communicate with each other. The nature of computer network communication lends itself to a layered approach in which individual protocols in the protocol stack run largely independently of other protocols. This allows lower-level protocols to be customized for the network situation without changing the way higher-level protocols operate. A practical example of why this is important is that it allows an Internet browser to run the same code regardless of whether the computer it runs on is connected to the Internet through an Ethernet or Wi-Fi connection. Protocols are often discussed in terms of their place in the OSI reference model—a model that emerged in 1983 as the first step in a doomed attempt to build a universally adopted networking protocol suite.[49] The model itself is outlined in the picture to the right. It is important to note that the Internet's protocol suite, like many modern protocol suites, does not rigidly follow this model but can still be discussed in its context.
For the Internet, the physical medium and data link protocol can vary several times as packets travel between client nodes. Though it is likely that the majority of the distance traveled will use the Asynchronous Transfer Mode (ATM) data link protocol (or a modern equivalent) across optical fiber, this is in no way guaranteed. A connection may also encounter data link protocols such as Ethernet, Wi-Fi and the Point-to-Point Protocol (PPP), and physical media such as twisted-pair cables and free space.
At the network layer, things become standardized, with the Internet Protocol (IP) being adopted for logical addressing. For the world wide web, these "IP addresses" are derived from the human-readable form (for example, 72.14.207.99 is derived from www.google.com) using the Domain Name System. At the moment, the most widely used version of the Internet Protocol is version four, but a move to version six is imminent. At the transport layer, most communication adopts either the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP). Broadly speaking, TCP is used when it is essential that every message sent is received by the other computer, whereas UDP is used when it is merely desirable. With TCP, packets are retransmitted if they are lost and placed in order before they are presented to higher layers (this ordering also allows duplicate packets to be eliminated). With UDP, packets are not ordered or retransmitted if lost. Both TCP and UDP packets carry port numbers with them to specify what application or process the packet should be handed to on the client's computer.[50] Because certain application-level protocols use certain ports, network administrators can restrict Internet access by blocking or throttling traffic destined for a particular port.
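The role of port numbers can be sketched with a small UDP exchange over the loopback interface (the message and addresses here are illustrative): the datagram is delivered to whichever socket is bound to the destination port, with no ordering or retransmission guarantees.

```python
import socket

# UDP sketch: the receiver binds a port; the sender addresses a datagram to
# that port. TCP (socket.SOCK_STREAM) would add ordering and retransmission.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
port = receiver.getsockname()[1]     # this number identifies the application

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)
print(data.decode())                 # hello
sender.close()
receiver.close()
```

An administrator blocking a port, as described above, would simply discard datagrams addressed to that port number before they reached the bound socket.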
Above the transport layer there are certain protocols that loosely fit in the session and presentation layers and are sometimes adopted, most notably the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. These protocols ensure that the data transferred between two parties remains completely confidential and one or the other is in use when a padlock appears at the bottom of your web browser. Another protocol that loosely fits in the session and presentation layers is the Real-time Transport Protocol (RTP) most notably used to stream QuickTime video.[51] Finally at the application layer are many of the protocols Internet users would be familiar with such as HTTP (web browsing), POP3 (e-mail), FTP (file transfer) and IRC (Internet chat) but also less common protocols such as BitTorrent (file sharing) and ICQ (instant messaging).
Despite the growth of the Internet, the characteristics of local area networks (computer networks that run over at most a few kilometers) remain distinct. This is because networks on this scale do not require all the features associated with larger-scale systems and are often more cost-effective and speedier without them.
In the mid-1980s, several protocol suites emerged to fill the gap between the data link and application layers of the OSI reference model. These were AppleTalk, IPX and NetBIOS, with the dominant protocol suite during the early 1990s being IPX due to its popularity with MS-DOS users. TCP/IP existed at this point but was typically used only by large government and research facilities.[52] However, as the Internet grew in popularity and a larger percentage of local area network traffic became Internet-related, LANs gradually moved towards TCP/IP, and today networks mostly dedicated to TCP/IP traffic are common. The move to TCP/IP was helped by technologies such as DHCP (introduced in RFC 2131) that allowed TCP/IP clients to discover their own network address—a functionality that came standard with the AppleTalk/IPX/NetBIOS protocol suites.
However, it is at the data link layer that modern local area networks diverge from the Internet. Whereas Asynchronous Transfer Mode (ATM) or Multiprotocol Label Switching (MPLS) are typical data link protocols for larger networks, Ethernet and Token Ring are typical data link protocols for local area networks. The LAN protocols differ from the former in that they are simpler (for example, they omit features such as quality-of-service guarantees) and offer collision prevention. Both of these differences allow for more economical setups. For example, omitting quality-of-service guarantees simplifies routers, and the guarantees are not really necessary for local area networks because they tend not to carry real-time communication (such as voice communication). Including collision prevention allows multiple clients (as opposed to just two) to share the same cable, again reducing costs.[53]
Despite Token Ring's modest popularity in the 1980s and 1990s, with the advent of the twenty-first century the majority of local area networks have settled on Ethernet. At the physical layer, most Ethernet implementations use copper twisted-pair cables (including the common 10BASE-T networks). Some early implementations used coaxial cables, and some implementations (especially high-speed ones) use optical fibers. Optical fibers are also likely to feature prominently in the forthcoming 10-gigabit Ethernet implementations.[54] Where optical fiber is used, the distinction must be made between multi-mode fiber and single-mode fiber. Multi-mode fiber can be thought of as thicker optical fiber that is cheaper to manufacture but that suffers from less usable bandwidth and greater attenuation (that is, poorer performance).
All links retrieved January 20, 2020.
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license, which can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation.
Note: Some restrictions may apply to use of individual images which are separately licensed.