The Internet Structural Engineer

The internet is a vast web of cables, routers and servers that all need to work together seamlessly for it to function as intended. The person who ensures all this hardware is interconnected in the proper way and running optimally is called an Internet Structural Engineer.

This blog will explore the qualities that make someone qualified to be an internet structural engineer, what they do on a day-to-day basis, and how their profession has evolved with technology over the years. I'll also discuss some of the cool robots and equipment they have.

BRINGING DOWN THE WI-FI I GUESS...

The history of the internet can be traced back to ARPANET, a research network funded in the late 1960s by the Advanced Research Projects Agency (ARPA), an agency of the U.S. Department of Defense. It was built by a group of scientists who wanted far-flung research institutions to be able to share computers and data with one another, and it carried its first messages between university machines in 1969. One idea baked into its design from the start was that communication should keep working even if individual machines or links dropped out of the network.

For something to be "structural," its integrity must be maintained throughout the whole structure. If a building's structural integrity is compromised, it could collapse and endanger everyone inside it, as well as people outside who might be hit by falling debris. For example, if a building collapsed next to an oil rig and started a fire, the structural engineer would have failed at their task and everyone working on that rig would face an uncertain fate.

Luckily, the internet was built with redundancy at multiple levels to avoid such a catastrophe. In this post, I'll walk through the different levels of redundancy the internet has, which is something I've been researching for the past three years. After all, it's pretty cool that we can use these redundant networks for everything from storing our music to watching Netflix. But how did this remarkable system evolve?

WHAT ARE THE DIFFERENT LEVELS OF REDUNDANCY?

The internet consists of two main components: hardware and software. The hardware is what the software runs on, and the software is what tells the hardware how to operate. The most important thing to remember about internet structure is that it's based on a few plain principles:

1) Redundancy (duh!)

2) Resiliency

3) Proximity

The greater a structure's redundancy and resilience, the more likely it is to survive various disasters. A resilient design means that its construction allows it to withstand pressure while maintaining its structural integrity. This design is usually borne out of an analysis of the environment in which it will be placed and how it will be installed.

The way this works is by segregating the different parts of a building into zones, so that the areas most likely to see extreme stress during a system failure or collapse are kept separate from the areas people depend on for survivability. For example, in a hospital there are certain places you wouldn't want to be during a disaster, such as inside wall cavities or up near the ceiling, because they sit close to support pillars and other structural elements and take more intense stresses. Under normal conditions they wouldn't move during an earthquake or other disaster, but if certain supporting systems failed (such as a vent), they might collapse. This same principle can be applied to networks, as the sketch below illustrates.
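To make the network analogy concrete, here is a minimal sketch in plain Python (the four-router topology and its names are made up purely for illustration) that checks whether a small network stays connected when any single link fails. A ring of routers survives any one failure; a straight chain does not.

```python
from collections import deque

def reachable(nodes, links, start):
    """Return the set of nodes reachable from `start` over the given links."""
    neighbors = {n: set() for n in nodes}
    for a, b in links:
        neighbors[a].add(b)
        neighbors[b].add(a)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in neighbors[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def survives_any_single_link_failure(nodes, links):
    """True if every node can still reach every other node after removing any one link."""
    for i in range(len(links)):
        remaining = links[:i] + links[i + 1:]
        if reachable(nodes, remaining, nodes[0]) != set(nodes):
            return False
    return True

# Hypothetical topologies: a ring has a second path around any single failure, a chain does not.
ring = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]
chain = [("A", "B"), ("B", "C"), ("C", "D")]
print(survives_any_single_link_failure(["A", "B", "C", "D"], ring))   # True
print(survives_any_single_link_failure(["A", "B", "C", "D"], chain))  # False
```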

The software that communicates over the network and makes sure the hardware on each end is talking properly is often called the Application Layer (AL). This software is a bit more complicated, especially in the modern world. In earlier days, communication between applications was often a direct connection between two hosts (such as machines on the same network), but now it usually passes through additional protocol layers: the Hypertext Transfer Protocol (HTTP), which carries web traffic, running on top of the Transmission Control Protocol/Internet Protocol (TCP/IP). This layering makes it more difficult to figure out exactly what an application is doing when you can't watch the traffic travel from one host to another.
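As a minimal sketch of that layering (standard-library Python only, with example.com standing in as a placeholder host), the application-layer HTTP request below is just text handed to a TCP/IP socket; the operating system handles the transport and network layers underneath it.

```python
import socket

# Application layer: a plain HTTP/1.1 request, written out as text.
request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
).encode("ascii")

# Transport/network layers: TCP over IP, handled by the OS behind this socket.
with socket.create_connection(("example.com", 80), timeout=5) as sock:
    sock.sendall(request)
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# The status line is the first piece of application-layer data the server sends back.
print(response.split(b"\r\n", 1)[0].decode())
```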

The degree of redundancy in the AL is incredibly important, especially when communicating with a device that could physically break. For example, with wireless internet, routers and wireless access points need multiple layers of redundancy to keep devices talking to each other regardless of what happens around them. That way, if a router that's forwarding packets goes down, clients can fail over to another path and keep using the internet without a problem.
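Here is a minimal sketch of that failover idea, assuming two hypothetical gateways (the 192.0.2.x addresses are reserved documentation placeholders, not real equipment):

```python
import socket

# Hypothetical redundant gateways; these are placeholder documentation addresses.
GATEWAYS = [("192.0.2.10", 443), ("192.0.2.11", 443)]

def connect_with_failover(endpoints, timeout=3):
    """Try each redundant endpoint in order and return the first connection that succeeds."""
    last_error = None
    for host, port in endpoints:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as exc:
            last_error = exc  # this gateway is unreachable; fall through to the next one
    raise ConnectionError("all redundant endpoints are unreachable") from last_error

# Usage: the caller never needs to know which gateway actually answered.
# conn = connect_with_failover(GATEWAYS)
```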

The physical structure itself has also grown smarter over the years to accommodate more complex network setups. When I was growing up, there was really only one type of twisted-pair cable you'd run into: Cat-5e. Nowadays, however, there are many different categories of cable (CATs), each meant for a different purpose.

The standard UTP (unshielded twisted pair) cable is used for ordinary runs to laptops, desktops, and wireless access points. It's flexible, easy to strip, and it carries enough conductors for connectivity (four twisted pairs, eight wires in total). Another common type is Shielded Twisted Pair (STP), which is used in places such as data centers where electrical noise from nearby equipment is a concern. It's more expensive than UTP, but its shielding blocks outside interference that would otherwise corrupt signals and drop packets, making it more durable and reliable.

Yet another option is GPON (Gigabit Passive Optical Network), which runs over optical fiber and is used for internet backbones and telecom links. Unlike the copper CATs, fiber carries light rather than electrical signals, so it's immune to electrical interference, although the glass itself can still be physically damaged.

In the future, even more types of CATs will be available to accommodate different needs and circumstances. As technology continues to advance, we'll create smarter and smarter structures in order to survive a range of disasters. It's pretty cool that the internet has this capability, but it also highlights how vulnerable our current infrastructure is if something were to happen: we'd lose data before physical structures were in danger.

CURRENT PROBLEMS WITH REDUNDANCY

There are a few problems that have been identified with the current approach to internet redundancy. Let's talk about the first one: congestion. A single packet can usually take one of several routes through the internet, and if there's high demand for all of those paths at once, things get busy. Think about traffic going down a freeway: if only two lanes are open, how does all that traffic get through? Cars coming from different directions have to funnel into the same lanes, and everything slows down.

Conclusion: No matter how much redundancy is used, there's only so much bandwidth available.
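To put rough numbers on the lane analogy, here is a toy sketch (the rates are made up purely for illustration) showing that once the packets arriving per second exceed what the available paths can forward combined, a backlog builds no matter how the load is spread:

```python
def simulate_backlog(arrival_rate, path_capacities, seconds):
    """Toy model: packets arriving per second versus what all paths combined can forward."""
    total_capacity = sum(path_capacities)
    queued = 0
    for _ in range(seconds):
        queued += arrival_rate                 # new packets wanting to cross this second
        queued -= min(queued, total_capacity)  # packets the paths manage to carry
    return queued

# Illustrative numbers only: three paths, each forwarding 100 packets per second.
print(simulate_backlog(250, [100, 100, 100], seconds=10))  # 0, demand fits within capacity
print(simulate_backlog(350, [100, 100, 100], seconds=10))  # 500, the backlog grows every second
```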

So, what causes congestion? It's two things: one is the demand for a specific path, and the other is how effectively all of the packets are handled along their routes (called protocol efficiency). Protocol efficiency can be measured by several factors including latency and packet loss.

Latency is measured in milliseconds (ms). The time it takes a packet to travel from its point of origin to its destination and back again is called the round-trip time (RTT). To measure the average latency to a router across your network, add up the round-trip times of the packets you've sent it and divide by the number of packets.
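Here is a minimal sketch of that measurement, using the time a TCP connection takes to be set up as a rough stand-in for RTT (a true ICMP ping needs raw sockets and elevated privileges; example.com is just a placeholder target):

```python
import socket
import time

def average_rtt_ms(host, port=443, samples=5, timeout=3):
    """Estimate average round-trip time in milliseconds via TCP connection setup."""
    times_ms = []
    for _ in range(samples):
        start = time.perf_counter()
        # Completing the TCP handshake takes roughly one round trip to the host.
        with socket.create_connection((host, port), timeout=timeout):
            pass
        times_ms.append((time.perf_counter() - start) * 1000)
    return sum(times_ms) / len(times_ms)  # total time divided by the number of samples sent

print(f"average RTT to example.com: {average_rtt_ms('example.com'):.1f} ms")
```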
