A comprehensive history of communication
What's the internet, what's the web, and how they came to be.
A short introduction
Communication is defined as the act of exchanging information. Since the beginning of humanity, humans have been developing new methods and tools of communication: starting with inscriptions in caves, moving to the use of verbal cues, and then to the development of languages and writing. These advances ultimately led to the start of civilization, as information was exchanged from one person to another and from one generation to the next.
In this essay, we will discuss the history and future of communication technology. We will begin by covering the benchmarks in communication technology, focusing specifically on telecommunication devices. Next, we will describe the most common forms of telecommunication technology in use today. Then, we will introduce Starlink and the next generation of satellite internet providers that aim to deliver wireless internet to remote areas around the globe. Finally, we will weigh the pros and cons of this recent technology and discuss whether it is beneficial to society.
Since before the first century, humans have used devices to communicate with one another over a distance. One of the earliest methods was used by the Roman Empire, which employed polished metal mirrors to reflect sunlight. This practice is now known as telecommunication, more formally defined as the sending and receiving of messages over a distance for the purpose of communication. Modern telecommunication technology can be divided into four major categories: telegraphs, landline telephones, radio and television, and the internet.
Communication in the First Industrial Revolution
Visual Telegraph
Before electricity, visual telegraphs were used as a form of telecommunication. The most successful visual telegraph was the semaphore, invented in 1791 by the Chappe brothers. The semaphore consisted of a tower topped with a crossbeam, and at each end of the crossbeam was a smaller pivoting beam. Each of the three members could pivot to different angles to represent different letters, numbers, or symbols. On average, three symbols were displayed per minute. Towers were constructed in chains, spaced 5 to 10 kilometers apart, and each was individually operated. To communicate, the first operator would rotate the arms to represent a symbol, then wait for the next tower to copy it before displaying the next symbol. This would continue down the chain until the message reached its destination.
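To get a feel for how slow this was, here is a rough back-of-the-envelope estimate; the message length and chain length are assumed purely for illustration and are not historical figures.

```python
# Rough estimate of semaphore relay time under assumed conditions.
symbols_per_minute = 3     # average display rate cited above
message_symbols = 30       # assumed message length (illustrative)
towers_in_chain = 20       # assumed chain length (~100-200 km at 5-10 km spacing)

# Treat the chain as a simple pipeline: the first symbol must cross every
# tower, and the remaining symbols follow one display-slot apart.
slots = towers_in_chain + message_symbols - 1
minutes = slots / symbols_per_minute
print(f"~{minutes:.0f} minutes to relay {message_symbols} symbols "
      f"through {towers_in_chain} towers")   # roughly 16 minutes
```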
Communication in the Second Industrial Revolution
The Electronic Telegraph
With the introduction of electricity came more methods of communication using telegraphs. In 1835, Samuel Morse and Alfred Vail invented the electromagnetic Morse telegraph. This device used Morse code, which consists of a series of dots and dashes that represent different letters and numbers. The sender and receiver are connected through low-voltage wires, and the sender would tap or hold a telegraph key to represent either a dot or a dash. On the other end, the receiver would instantly obtain an embossed print of the message in Morse code. With financial help from the government, Morse and Vail built a telegraph system connecting Washington, D.C., and Baltimore, Maryland. This form of telecommunication remained the most common until it was replaced by the telephone roughly 100 years later.
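To illustrate how such an encoding works, here is a minimal sketch with a small, truncated Morse table; it is purely illustrative and not part of the historical apparatus.

```python
# A few letters of International Morse code, showing how dots and dashes
# stand in for text (table truncated for brevity).
MORSE = {
    "A": ".-", "B": "-...", "E": ".", "H": "....",
    "L": ".-..", "O": "---", "S": "...", "T": "-",
}

def encode(text: str) -> str:
    """Encode a message as Morse code, one space between letters."""
    return " ".join(MORSE[ch] for ch in text.upper() if ch in MORSE)

print(encode("SOS"))    # ... --- ...
print(encode("HELLO"))  # .... . .-.. .-.. ---
```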
The Telephone
On March 7th, 1876, Alexander Graham Bell was granted the patent for a device that transmits sound telegraphically. Three days later, the first successful telephone transmission was made. Bell's design was similar to the electromagnetic telegraph, but it transmitted sound.
The design worked by converting vocal vibrations into electric currents, which were sent to the receiver through landlines. Early telephone calls worked by connecting your personal telephone to a switchboard operator, who then manually connected you to the person you wished to call; this was known as Circuit Switching. Long distance calls required additional connections through multiple switchboard operators. In 1891, the inventor Almon Strowger patented the direct dial, which eventually made switchboard operators obsolete. Switchboards remained in operation until the early 1990s in some rural areas of California and of New South Wales in Australia.
The Radio & Television
After the discovery of electromagnetic waves, communication through telegraphs became wireless. In the late 19th century, Morse code was used to communicate between ships and land. In the mid-1890s, the Italian inventor Marconi used the technical developments in radio waves to patent a radiotelegraphy device, or as we simply know it, the radio. This technology quickly caught on and became the primary choice for long distance communication. Over the next 20 years, radio transmitters and receivers became a staple in countries around the world. In 1920, the Pittsburgh radio station KDKA became the first commercial radio station to play scheduled music and talk radio. Two years later, there were over 550 scheduled stations across America.
As the technology of radio waves evolved, so did the race to send images wirelessly. In 1922, the Scottish inventor John Baird was credited with successfully transmitting an image to a television through radio waves. Three years later, in 1925, Baird transmitted the first live video of a human face through radio waves. Like radios, television sets quickly caught on as a way of receiving news and media. In 1946, it was estimated that only 6,000 Americans owned personal television sets. That number had grown two-thousand-fold by 1951, to 12 million television sets across the United States, and by 1955 fifty percent of American homes owned a television set.
Communication in the Third Industrial Revolution
The internet
The internet started taking shape when a societal need for it began to emerge. In the 1950s, colleges started separating computers from terminals: the terminals sent data to the centralized computer, which allowed inexperienced users to safely use a terminal without risking damage to the computer itself. As the government invested more money in supercomputers at universities, it wanted to apply the same idea to allow the use of these supercomputers from a distance, giving wider access to other research facilities. This was an early form of what we now call “Cloud Computing”, where less capable computers gain access to the computing power of more capable computers through the exchange of information.
Genesis: The creation of the internet
In 1958, the US Department of Defense created the Advanced Research Projects Agency (ARPA) in an attempt to keep its technological advancements ahead of the Soviet Union’s. In doing so, ARPA funded research into a new method of communication: a computer network connecting the nation’s research facilities. In 1969, ARPA started the network by connecting four universities around the US and called this network “ARPANET”. ARPANET kept developing and introducing new technologies and protocols that still shape the internet today.
To allow communication between universities, ARPANET used phone line connections between computers, though it innovated in how those lines were used. To make information flow faster and more efficiently, ARPANET got rid of Circuit Switching and instead introduced Packet Switching. The need for Packet Switching came from the fact that the network of computers needed to communicate many-to-many, while Circuit Switching could only connect one computer to one other computer at a time. Packet Switching uses the same set of wires to send different data over the network, which was achieved by communicating through “packets”. A packet contains a string of data holding the information being sent, and this string also includes an “address” representing the destination of the message (the other computer). The packet travels from one computer on the network to the computer that is nearest to it yet still closest to the destination, traversing the network until it finally reaches the destination. This meant that each computer had to be able to read the packet’s address and look up all the other computers on the network to relay the packet correctly. This new way of transferring data removed a lot of operational costs, such as Circuit Switching operators, making the technology not only faster and more efficient but also cheaper.
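A minimal sketch of the idea may help: each packet carries its destination address and is relayed node by node until it arrives. The four-node topology loosely echoes the first ARPANET sites, but the links and the breadth-first relay strategy are simplified assumptions, not the historical routing algorithm.

```python
# Hop-by-hop packet forwarding over an illustrative four-node network.
network = {                     # adjacency list: who is wired to whom (assumed links)
    "UCLA": ["SRI", "UCSB"],
    "SRI": ["UCLA", "UTAH"],
    "UCSB": ["UCLA", "UTAH"],
    "UTAH": ["SRI", "UCSB"],
}

def route(src: str, dst: str) -> list[str]:
    """Relay a packet node by node to its destination (fewest hops,
    found by breadth-first search)."""
    frontier, seen = [[src]], {src}
    while frontier:
        path = frontier.pop(0)
        if path[-1] == dst:
            return path
        for neighbor in network[path[-1]]:
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append(path + [neighbor])
    raise ValueError("destination unreachable")

packet = {"address": "UTAH", "data": "LOGIN"}   # the address travels with the data
print(route("UCLA", packet["address"]))          # ['UCLA', 'SRI', 'UTAH']
```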
As ARPANET grew, a new method of packet relaying was needed, because the existing system relied on storing the network’s full, up-to-date list of computer addresses (the “Address Book”) on every computer in the network. Instead, ARPANET elected Stanford to act as a central computer that stored all the addresses in a Centralized Address Book, allowing the network to grow to over 100 computers by 1977.
This new way of communication was so efficient and innovative that new networks of computers using the Packet Switching technology emerged all over the globe, some surpassing ARPANET in size. The standardization of Packet Switching across these different networks, through a protocol that standardized how packets are formatted worldwide and a new decentralized system that effectively relayed them, led to the realization of a global communication network of more than 20,000 computers by 1987. This was the start of the Internet. By connecting other networks to ARPANET, more computers could communicate over the broader network, making ARPANET the backbone of the internet.
Two of those standards, the Transmission Control Protocol/Internet Protocol (TCP/IP) and the Domain Name System (DNS), are still used today. TCP/IP acts as a global language for labeling and delivering packets with addresses that work worldwide, while the DNS acts as a distributed directory that translates human-readable names into those addresses so packets can reach their destinations.
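As a small illustration of that division of labor, the snippet below uses Python's standard library to resolve a name through DNS and then open a TCP connection to the resulting address; the hostname is only a placeholder, and the request is a bare-bones HTTP HEAD that needs a live internet connection to run.

```python
import socket

# DNS: translate a human-readable name into an IP address.
# ("example.com" is just a placeholder hostname.)
ip = socket.gethostbyname("example.com")
print("resolved to", ip)

# TCP/IP: open a reliable connection to that address and exchange bytes.
with socket.create_connection((ip, 80), timeout=5) as conn:
    conn.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(conn.recv(64))   # first bytes of the server's reply
```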
In 1990, the Department of Defense closed the ARPANET project, as it had grown from a network connecting the nation’s supercomputers into the groundwork for a global network of communication. In doing so, ARPANET handed the governance and maintenance of the internet to the National Science Foundation Network (NSFNET), making NSFNET the backbone of the internet with more than 500,000 users.
Before NSFNET, the internet was reserved for non-commercial use. This meant that although some private commercial networks existed, they were not allowed on the internet, as the internet was government owned and funded. In 1988 (before NSFNET became the backbone of the internet), NSFNET experimented with limited commercial use of its network by connecting it to select commercial private networks, giving rise to commercial email over the internet.
In 1992, Congress allowed the commercial use of the Internet. This meant that any private commercial network could now join the Internet and give regular users access. It also gave rise to Internet Service Providers (ISPs). ISPs usually did not own or operate a network; instead, they simply connected regular users to a local network and then to the Internet. By 1995, ISPs had grown large enough that NSFNET shut down, allowing ISPs to become the backbone of the internet.
Beep Boop: The Dial-up internet
All of these innovations laid the technological infrastructure for “Dial-up” internet. Dial-up made use of the existing network of phone lines already in place to give regular users “Internet access”. To connect to a network, the computer would place a phone call to the network through a modem, then communicate with the computers on the network over the landlines. This acted as a bottleneck: large amounts of data need high frequencies that analog communication over landlines cannot support, which limited the amount of data that could be transferred to a maximum of 56 Kbps.
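A quick calculation shows what that 56 Kbps ceiling meant in practice; the 5 MB file size is chosen only for illustration.

```python
# Time to download a 5 MB file over a 56 Kbps dial-up link.
file_megabytes = 5
file_bits = file_megabytes * 8 * 1_000_000   # bits to transfer
dialup_bps = 56_000                           # 56 Kbps ceiling

seconds = file_bits / dialup_bps
print(f"~{seconds / 60:.0f} minutes over dial-up")   # about 12 minutes
```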
Broadband Internet
Today, communication through this global network of computers (the Internet) works in a similar way to ARPANET. ISPs provide the necessary communication lines and technologies to connect a user’s device to the internet (these devices vary from mobile phones to smart lightbulbs, yet they are all part of the Internet). Data packets still travel over the communication channels provided by the ISP and are relayed using the same standards and protocols that ARPANET developed. All these packets, however, are sent through much more efficient channels, using Broadband Technology.
Broadband Technology is an umbrella term for transmission channels capable of carrying more than 25 Mbps of data (approximately 450 times the speed of Dial-up). This rate of transfer is now considered the baseline for a good internet connection (i.e., access to the world). Broadband Technology comes in many forms but can generally be split into two categories: Wireline and Wireless. Dial-up is a Wireline technology, yet it lacks broad bandwidth (the capability to carry high-frequency signals); the wires it uses are the landlines. The Digital Subscriber Line (DSL) similarly uses landlines, but it communicates digitally rather than in analog and can support up to 100 Mbps. Another Wireline technology is Broadband cable, which leverages the infrastructure of cable TV and allows up to 1000 Mbps. The most interesting Wireline technology, however, is Optic Fiber. Optic Fiber is a type of wire made of glass fiber; contrary to the other Wireline Broadband technologies, which all use metallic wires to carry electric signals, Optic Fiber uses light to carry pulses of data at speeds of up to 2000 Mbps. Currently, Optic Fiber wiring provides the backbone of the Internet, providing points of access all around the world through laid cables that span land, oceans, and seas.
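Repeating the earlier download calculation across these technologies makes the gap concrete; the speeds used are the nominal maximums quoted above.

```python
# Time to move the same 5 MB file over each technology (nominal maximum speeds).
speeds_mbps = {"Dial-up": 0.056, "DSL": 100, "Cable": 1000, "Optic Fiber": 2000}
file_bits = 5 * 8 * 1_000_000

for name, mbps in speeds_mbps.items():
    seconds = file_bits / (mbps * 1_000_000)
    print(f"{name:>11}: {seconds:8.3f} s")
# Dial-up ~714 s, DSL 0.4 s, Cable 0.04 s, Optic Fiber 0.02 s
```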
On the other hand, Wireless Broadband technologies do not need cables, as they transmit data using high-frequency waves. These waves are relayed in different ways, mainly via satellites or cellular towers, and both are now being explored as ways to provide wireless internet access. With cellular towers, 5G promises data transfer speeds of up to 1800 Mbps, almost reaching Optic Fiber speeds. With satellites, companies like StarLink are targeting rural areas that have underdeveloped Wireline infrastructure (and, in turn, few cellular towers), with promised speeds of up to 300 Mbps.
The impact of the internet on communication
In the wake of COVID-19, the internet evolved from being a secondary source of communication to a primary source. With the closure of offices, universities, and buildings, the Internet was the only technology capable of creating a medium of communication that is free from physical contact. 2020 proved the importance of the Internet and the reliance that society
has on it. That being said, many rural areas still lack access to the Internet: the physical wiring that lays the foundation of networking is expensive to install, and the low population density of rural areas makes building the infrastructure needed to support Broadband Internet not cost effective.
Communication in the Fourth Industrial Revolution?
Starlink: Globalizing the internet
To counter this problem, StarLink is leading the way using Satellite Broadband Technology. StarLink has quickly made a name for itself as the fastest satellite internet, offering high speed, low latency, and relatively cheap internet access. It is a project funded by Elon Musk's SpaceX, an aerospace company founded with the goal of eventually colonizing Mars by reducing space transportation costs. StarLink has launched satellites at a steadily increasing rate, seen its user base grow, and delivers internet speeds of 50-150 Mbps.
StarLink has improved drastically since beta testing began in 2019. Firstly, internet speeds began at 20-25 Mbps and have now reached around 150 Mbps, with some users reporting speeds of up to 215 Mbps. Secondly, the beta testing phase saw occasional internet lag during extreme weather; however, this has since been addressed. Third, the constellation began with around 1,000 satellites launched, a number planned to grow to 12,000. Furthermore, the number of users has grown exponentially, from 10,000 to 100,000 in around two years. Finally, beta testing included only three countries, the US, Canada, and the UK, but the service now spans over 12 different countries and will continue to expand. Moving forward, StarLink has ambitious plans to launch 42,000 satellites into low earth orbit (LEO) by the end of 2027.
Since traditional satellite internet is known for high latency, StarLink satellites orbit much lower, about 550 kilometers (340 miles) above the Earth’s surface. The aim of these satellites is to eventually deliver one gigabit per second of internet speed. Sixty satellites were deployed by the Falcon 9 at an altitude of 440 kilometers above the Earth’s surface on May 9th, 2021. From there, the satellites slowly spread around the Earth using their individual thrusters and climbed up to 160 kilometers higher than their initial deployment altitude. Because StarLink satellites orbit at a much lower altitude, they cover a much smaller geographical area than other communication satellites, which is why thousands of StarLink satellites are needed to cover the globe. To get service from these satellites, an antenna is not enough; a ground station within a range of 500 miles is also needed.
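The link between altitude and coverage can be sketched with simple geometry. The snippet below estimates the line-of-sight footprint of a satellite at a given altitude; it ignores the minimum elevation angle a user terminal actually needs, so it only illustrates the trend rather than StarLink's real cell sizes.

```python
import math

# Fraction of the Earth's surface visible from a satellite at a given altitude
# (spherical-cap geometry; real usable coverage is smaller).
R = 6371.0   # Earth's radius in km

def footprint_area_km2(altitude_km: float) -> float:
    """Area of the spherical cap in view from the satellite."""
    lam = math.acos(R / (R + altitude_km))   # angular radius of the cap
    return 2 * math.pi * R**2 * (1 - math.cos(lam))

earth_area = 4 * math.pi * R**2
for alt in (550, 35_786):                    # StarLink LEO vs geostationary orbit
    frac = footprint_area_km2(alt) / earth_area
    print(f"{alt:>6} km altitude -> sees ~{frac:.1%} of the Earth's surface")
# roughly 4% at 550 km versus roughly 42% from geostationary orbit
```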
Creating the New Internet Backbone
On January 24th, 2021, SpaceX launched its Transporter-1 mission with a record-breaking 143 satellites on board. What makes this mission so important is that only 10 of the 143 satellites were StarLink satellites, with the remainder being smallsats from other customers and companies, making StarLink satellite launches cost effective. Furthermore, this was SpaceX’s 23rd consecutive landing of the reusable Falcon 9 rocket, a record for the company at the time. Now that we know what StarLink is and what it provides, why is SpaceX not maximizing how many StarLink satellites it sends into orbit? Why are other customers and companies putting their satellites on board the Falcon 9 rocket and taking up most of the payload space? The strategy behind this is quite clever, and to understand it we need to look at how the satellite industry has evolved over the past two decades.
Currently, SpaceX uses its reusable Falcon 9 rocket to send StarLink satellites into low orbit, with the goal of eventually moving to the larger Starship rocket. A launch of a loaded Falcon 9 rocket costs in the ballpark of 50 to 60 million dollars, which does not include the cost of manufacturing the StarLink satellites themselves. It is estimated that each StarLink satellite costs roughly $250,000 to manufacture. Although this is cheap by satellite industry standards, the cost of a StarLink launch is undeniably expensive. With StarLink aiming to eventually have over 40,000 satellites in orbit, getting this web of satellites up and running will cost several billion dollars. With such high costs, how has SpaceX managed to fund this project? The answer comes in the form of the ever-increasing popularity of small satellites over the past two decades.
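Using only the figures quoted above, a back-of-the-envelope estimate of the bill looks like this; it assumes every launch is a dedicated, fully paid Falcon 9 flight, which (as discussed below) is exactly what the rideshare program avoids.

```python
# Rough deployment cost for the full constellation, from the figures above.
satellites = 40_000
manufacture_each = 250_000         # dollars per satellite
launch_cost = 50_000_000           # lower bound for a Falcon 9 launch
sats_per_launch = 60               # dedicated StarLink launch capacity

manufacturing = satellites * manufacture_each
launches = -(-satellites // sats_per_launch)    # ceiling division
launching = launches * launch_cost

print(f"manufacturing: ${manufacturing / 1e9:.0f}B")       # ~$10B
print(f"{launches} launches: ${launching / 1e9:.0f}B")     # ~$33B if every launch were paid in full
print(f"total: ${(manufacturing + launching) / 1e9:.0f}B")
```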
In recent years, there has been an increase in the popularity of small research satellites, as opposed to the massive satellites used several decades ago. It is no secret that technology has been advancing rapidly: what was once a computer that took up an entire room can now be found in the pockets of billions of people around the world. With satellites getting smaller and launches getting cheaper thanks to reusable rockets, there has been a growing focus on low-cost exploration. Just how small are small satellites? A smallsat is a satellite that is less than 500 kg in mass, and smallsats can be further classified into categories like microsatellites, which are between 10 kg and 100 kg, and nanosatellites, which range from 1 kg to 10 kg. Many small companies and university research teams have focused their work on what are known as CubeSats: small cube-shaped research satellites built from 10cm x 10cm x 10cm modules that can be sent alone as a single unit or stacked together to a maximum of 24 units. Each unit weighs roughly 1 kg and is incredibly simple in design. Historically, these CubeSats have been sent up to the International Space Station (ISS) on
government rockets and launched from the ISS, either by propulsion or by hand. As you can imagine, not only are the costs of doing this very high, but a company would have to wait until the next mission to the ISS to send its satellite up. In 2012, fewer than 100 CubeSats were successfully deployed into orbit; that number had risen to nearly 1,557 by 2020, with only about 100 of those satellites failing to reach orbit. This drastic growth is due to how much cheaper space travel has become in recent years. What does all of this have to do with SpaceX? One of SpaceX’s competitors, Rocket Lab’s Electron rocket, offers companies and individual customers dedicated launches to space. This means that if a customer has the money, they can send their satellites to space without having to use a government rocket or wait for a specific date. With a Rocket Lab Electron launch costing $7.5 million, sending a satellite into any region of space has become cheap and convenient, especially compared with the $50 million cost of a Falcon 9 launch. SpaceX’s answer to the competition came with the creation of the Smallsat Rideshare program.
The SpaceX Rideshare program is essentially like ridesharing here on Earth, but for satellites. Rideshare is a service where customers can book slots in the cargo area of the Falcon 9 rocket, essentially allowing their satellites to hitch a ride to space for much less than the cost of a dedicated rocket launch. Each purchasable slot on the SpaceX rocket has a maximum capacity of either 450 kg for the 15-inch port or an even larger 830 kg for the 24-inch port. A port for a 200 kg payload can be purchased for a starting price of just 1 million dollars, with prices increasing as the weight increases. SpaceX makes the process of buying a slot on the rocket as easy as booking a plane ticket. It requires that customers verify the compatibility of their satellites with the mechanical and electrical launch systems of the rocket, as well as make sure the
satellites can withstand the loads, vibrations, pressures, and more that come with the launch. Customers can download the rideshare payload user’s guide from the rideshare website, which provides not only all the requirements for their satellites but also all the different adapters, interface rings, and port sizes available. To book a spot on an upcoming rideshare launch, a company simply goes to the SpaceX rideshare website, picks its desired orbit and launch time, inputs its payload mass, picks the type of port used to connect to the rocket, and pays an initial deposit. However, what does a customer do if they have a microsatellite or CubeSat? Recall that a single CubeSat unit weighs roughly 1 kg, and even a bundle weighs much less than the maximum payload of a slot on the rocket. The solution is that satellite companies that buy a slot on the Falcon 9 rocket can choose to share it with even smaller satellites, allowing customers with CubeSats to pay a fraction of the cost to launch their satellites.
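As a toy illustration of that pricing structure, the sketch below quotes a slot from the $1 million, 200 kg base price given above; the per-kilogram surcharge applied beyond that base is a made-up placeholder, not SpaceX's actual rate card.

```python
# Toy rideshare cost estimator (illustrative only).
BASE_PRICE = 1_000_000     # dollars, covers up to 200 kg (figure from the text)
BASE_MASS_KG = 200
EXTRA_PER_KG = 5_000       # hypothetical surcharge per kg above the base

def rideshare_quote(payload_kg: float) -> float:
    """Estimate the price of a slot for a payload of the given mass."""
    extra_kg = max(0.0, payload_kg - BASE_MASS_KG)
    return BASE_PRICE + extra_kg * EXTRA_PER_KG

for mass in (50, 200, 450, 830):   # CubeSat bundle up to the largest port
    print(f"{mass:>4} kg -> ${rideshare_quote(mass):,.0f}")
```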
If we go back to the Transporter-1 mission in January, we see that only 10 of the satellites on board were StarLink satellites, and this was intentional. The Falcon 9 rocket can hold up to 60 StarLink satellites if they are the only payload on board. By allowing other satellite companies, customers, and university research teams to purchase slots on the Falcon 9 rocket, the $50 million launch cost of the rocket is entirely covered by these companies. Although SpaceX can only fit one sixth of the StarLink satellites on a rideshare mission, the cost of sending them into orbit is essentially free, meaning SpaceX only needs to pay for the manufacturing of the satellites and not for launching them into orbit. This is also great for the other satellite companies and startups, because they can now send their research satellites into orbit for a fraction of the cost, allowing space exploration to progress.
The future of satellite launches is clearly in reusable rockets. SpaceX can reduce environmental impact as well as save money by using reusable rockets like the Falcon 9. The next big step for SpaceX is the development of its Starship rocket, SpaceX’s fully reusable transportation system. This even bigger rocket is designed to carry both crew and cargo not only to Earth’s orbit, but to the Moon and beyond. For StarLink, Starship will be able to carry 240 StarLink satellites, four times more than the current Falcon 9 rocket. With SpaceX’s ultimate goal of landing on Mars, it can provide services like StarLink on Earth while continuing its mission to reach Mars.
StarLink vs Fiber Optic
There are several factors that make StarLink different not only from other satellite internet services, but also from fiber optic. Firstly, StarLink is considered to be 10 to 40 times faster than other satellite internet such as Viasat. Despite being relatively expensive, it is still cheaper than other satellite internet services that offer lower speeds than StarLink. Even though StarLink only began in 2019, it is already the largest satellite constellation orbiting Earth, a testament to its rapid growth. The most important and notable difference between StarLink and fiber optic is that installing the infrastructure needed for fiber optic in rural communities is a lengthy process. So, despite StarLink launching only recently, it will undoubtedly reach more communities more quickly than fiber optic will. However, fiber optic remains the fastest internet connection, with speeds of up to 2000 Mbps.
Impact of StarLink on Society
StarLink has many important advantages and has made huge strides towards internet technology in general. Firstly, several communities in the US and Canada have immensely benefited from
the installation of StarLink, as it has greatly improved their welfare systems. An example of this was seen amongst the Pikangikum, a 3,000-person Indigenous community in Ontario. Prior to the installation of StarLink, access to higher education and a healthcare system was not feasible due to the lack of internet access; this has changed thanks to StarLink. Secondly, it is considered to be the fastest and most affordable satellite internet connection. Third, StarLink can be considered crucial not only for rural communities, but also for places where conventional internet is not easily accessible, such as airplanes and boats at sea.
Because StarLink is a relatively new project, it is for the most part unclear how it will affect society and the Earth as a whole, and it is debatable whether StarLink will have a positive or negative impact on society, as there are different perspectives from which to analyze it. What is certain is that it will increase overall productivity, as rural communities will be able to access the internet, and that communication around the world will increase.
All in all, the invention and constant improvement of StarLink has made it possible for rural communities to access the internet where it was conventionally impossible, and one day people will be able to access the internet from anywhere in the world. Its unique features, relative to other satellite internet services and to fiber optic, allow it to advance and reach more people worldwide. Moreover, as the fastest and most affordable satellite internet, StarLink has undoubtedly changed the way rural communities will function.
Urban Use of StarLink
Even though StarLink satellites offer impressive bandwidth, around 20 Gbps per satellite, this comes with a disadvantage in big cities. Every location is served by a fixed number of satellites, so StarLink users in big cities would share the same bandwidth, resulting in slower speeds than suburban users experience. Wiechman from Forbes recommends at least 5-10 Mbps for most households, not including gaming or 4K streaming, which require around 50-100 Mbps. Even if we assume each home needs only 5 Mbps, a single satellite could serve only about 4,000 users in a city, and even fewer in practice; that amount is frankly inadequate.
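The arithmetic behind that 4,000-user figure is straightforward:

```python
# Capacity arithmetic for one satellite, using the figures quoted above.
satellite_capacity_mbps = 20_000    # ~20 Gbps per satellite
per_home_mbps = 5                   # minimum recommended per household
streaming_home_mbps = 75            # mid-range of the 50-100 Mbps cited for 4K/gaming

print(satellite_capacity_mbps // per_home_mbps)         # 4000 homes at the bare minimum
print(satellite_capacity_mbps // streaming_home_mbps)   # ~266 homes at streaming/gaming rates
```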
Most cable or satellite internet companies offer hardware installation services, in contrast to StarLink, which does not. This could be a challenge for many users, as they must either install the dish and connections themselves or hire a technician, which is not always possible. In addition, one of the most frequent issues is obstruction of the antenna's view of the sky, as StarLink requires a clear view of the sky to operate. This poses a challenge for users living in high-rise buildings, who would need permission from landlords and planning approval to install the dish.
When compared to cellular internet, StarLink is not as portable. We can use our phones to access the internet from anywhere; the StarLink dish, on the other hand, is not portable at all. Even if users attached their StarLink dish to an RV, this would not work, and their service would be interrupted. Users frequently ask StarLink whether it is possible to travel with the hardware or move it to a different address. Their response is adequate but somewhat disappointing, stating that:
StarLink satellites are scheduled to send internet down to all users within a designated area on the ground. This designated area is referred to as a cell. Your StarLink is assigned to a single cell. If you move your StarLink outside of its assigned cell, a satellite will not be scheduled to serve your StarLink, and you will not receive internet. This is constrained by geometry and is not arbitrary geofencing. (Frequently Asked Questions)
The most common fear expressed by those surveyed was the likelihood of SpaceX imposing data limitations in the future. Only a few people believe that limitations are unavoidable (Sheetz). Many users would cancel their subscription unless any StarLink data cap fell within the range of "750 gigabytes to 1 terabyte per month" (Sheetz, 2021).
Satellite Broadband Technology vs Nature
Lightning is another major issue for StarLink; most users who opt in for the service live in remote areas with little to no lightning protection. The chance of a strike destroying network gear and servers is relatively high. StarLink replied to customer concerns that the hardware "meets the U.S. National Electrical Code (NEC) grounding requirements and includes the necessary lightning protection". However, they also mention that "any user who lives in an area with lightning should have the appropriate lightning protection installed by your local electrical code before using StarLink". No device is perfect, especially when it comes to lightning, so users should still plan preventative measures before installing the hardware.
There are also real dangers posed by StarLink satellites while they are in orbit. Many astronomers fear that the push by StarLink and other companies, such as Amazon and OneWeb, for mega-constellations of satellites will negatively affect observations of the night sky, and it already has. Astronomers depend on highly sensitive telescopes to observe planets, stars, and other celestial objects.
Now astronomers observe satellites capable of outshining celestial objects; they appear as dozens of glowing beads streaking across the night sky.
It gets worse; these mega-constellations are far from complete. "U.K.-based OneWeb hopes to orbit about 650 satellites, possibly increasing to almost 2,000 in the future, and U.S.-based Amazon is planning for more than 3,000 satellites in its Project Kuiper constellation." StarLink itself "plans to launch at least more than 12,000 satellites" and has also received approval from the FCC for another 30,000. Companies and countries like China are stepping up to the plate as well, with projections predicting that more than 50,000 satellites will be sent into space in the coming decades. This will make it hard, or even impossible, for astronomers to take observations without a stream of low earth orbit satellites photobombing their expensive telescopes.
The Vera C. Rubin Observatory, located on a mountaintop in Chile, will use a giant mirror to capture faint, fast-changing objects. The observatory has a budget request of "473 Million Dollars", and it will be among the most affected by thousands of low-flying communications satellites. With its large field of view and sensitive 8.4-meter mirror, the Vera C. Rubin Observatory is a "perfect machine to run into" these satellites, making data analysis and findings more difficult.
SpaceX's first attempt to mitigate the damage was to fly DarkSat, a prototype StarLink satellite with a black anti-reflective coating. Recent ground-based studies of DarkSat in orbit found it about half as bright as a conventional StarLink satellite, which is a significant improvement but still well short of what astronomers say is required.
Not only is optical astronomy at risk; radio astronomy is also negatively impacted by low earth orbit (LEO) satellites. The one hundred ninety-seven radio astronomy dishes of the Square Kilometre Array (SKA) in South Africa are housed in a radio-quiet zone where even a cellphone is prohibited, in order to preserve the array's views of the sky. That safeguard, however, will not protect the telescope from what could soon be overhead: tens of thousands of communications satellites blasting radio waves down from orbit. StarLink and other operators would interfere with one of the radio channels that the SKA expects to use, known as 5b, making it more challenging to search for organic molecules and water molecules in space, which are important markers in cosmology. Many believe that if the number of satellites in mega-constellations hits 100,000, the entire band 5b will be unusable. This would impact the traces that cosmologists use to figure out how dark energy is speeding up the expansion of the cosmos. A future solution would be to have satellite operators "turn off their transmitters, move to other bands, or point them away" when flying over a radio observatory.
Because of their extraordinary sensitivity, the SKAO telescopes will be installed in remote locations far away from artificial radio frequency interference, and these areas have been designated as national Radio Quiet Zones (RQZ), which "provide legal protection from ground-generated radio signals such as cell phones, broadcasting transmitters, and Wi-Fi" (Garnier & Stevenson, 2020). But RQZ status does not protect against interference from space transmitters. Thanks to the historically modest number of satellites and their stable positions in the sky (most of them in geostationary orbit), radio astronomy has so far been able to observe in all frequency bands. Deploying thousands of satellites in low earth orbit will unavoidably change this, as astronomers will encounter a considerably larger number of fast-moving radio emitters in the sky.
So, What now?
We can conclude that the need to communicate has led to many technological strides that defined the world that we live in today. New Communications Technologies have created very effective and efficient ways of exchanging information over distance. This gave rise to the Digital Revolution
and is currently paving the way for a new Industrial Revolution that will reshape our society for the better. To achieve that, however, we must take a precautionary approach when facing such emerging technologies, especially when it comes to building a new communications infrastructure like StarLink's: an infrastructure that is economically and physically irreversible.
Satellite communications projects of this magnitude will relieve a real-world communications problem, but at what cost? And is a Proactionary approach enough for projects with this much impact on society and nature?