
Decoding the Edge: A Systematic Approach to Edge Deployments

Alex Pope

In October 2000, two researchers from UC Berkeley published the first study to quantify, in computing terms, the total amount of new and original information created and stored on physical media in a given year. In 1999, that number was 1.5 exabytes. This year we’ll hit 74 zettabytes, and we’ll reach 149 zettabytes by 2024. One zettabyte is equal to 1,000 exabytes, so that trajectory represents roughly a fifty-thousand-fold increase in a little over two decades.

What caused the spike? Personal computers date back to the 1970s and were fairly common in many homes as early as the 80s. The internet went mainstream in the mid-90s. These developments drove increases in data generation, but the real trigger was a few years away.

The first 3G-enabled smartphone hit the market in 2001; the first iPhone drove commercial adoption to new heights in 2007 and was followed shortly by the iPad in 2010. By 2026, Ericsson predicts we will be generating 226 exabytes of mobile data traffic per month. When we consider computing and data generation, there is the time before mobile computing and the time after.

This is relevant today because of the impact mobile computing eventually had on the data center industry. As mobile applications became more advanced, and consumer expectations around performance and latency more demanding, computing moved out of the traditional data center and closer to those consumers, to what we recognize today as the edge of the network.

Growth of Edge Computing Calls for Categorization

Consumer growth in mobile computing wasn’t the only driver, of course. Demand for low-latency computing became ubiquitous, with everyone from Wall Street to Walmart measuring success in milliseconds. The move to the edge has been the most significant post-cloud trend in the data center, and the introduction of new applications and technologies in intelligent transportation, telehealth, and countless other areas ensures the edge is here to stay. The pandemic-fueled rise in remote work and computing only accelerated the transition.

In the early days of the edge, one of the most significant challenges facing our industry was simply understanding what “the edge” really was. For some, it was a simple IT closet. For others, it was more or less a micro data center. The configurations and applications they supported were so diverse and dissimilar that any broad discussion of the edge felt pointless.

We first addressed that in 2018 with the introduction of four edge archetypes — a way to categorize edge deployments based on use cases. We developed the archetypes to better understand the edge, and we use them to equip edge sites to meet the needs of the organizations and end users that rely on them. The four archetypes are Data Intensive, Human-Latency Sensitive, Machine-to-Machine Latency Sensitive, and Life Critical, and you can find descriptions of each in this white paper.
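
To make the framework concrete, here is a minimal sketch of the four archetypes as a Python enumeration. The code and its identifiers are our own illustration, not an official Vertiv schema:

```python
from enum import Enum

class EdgeArchetype(Enum):
    """The four edge archetypes, which categorize deployments by use case."""
    DATA_INTENSIVE = "Data Intensive"
    HUMAN_LATENCY_SENSITIVE = "Human-Latency Sensitive"
    MACHINE_TO_MACHINE_LATENCY_SENSITIVE = "Machine-to-Machine Latency Sensitive"
    LIFE_CRITICAL = "Life Critical"
```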

This was a good start, but it was just that — a start. Edge applications are only one variable, and a virtual one at that. The physical assets enabling these applications must live at these edge locations. Recognizing that, we applied a similar process to categorizing those locations at the edge and, much as we did with the original archetypes, we found commonalities. Today’s edge networks tend to follow one of four deployment patterns:

  • Geographically Dispersed: These sites are similarly sized and spread across large geographies, typically a country or region. Retail, with stores scattered across a chain’s footprint, or consumer finance, with its network of bank branches, are good examples.
  • Hub and Spoke: This also typically covers a large area, such as a country or region, but the sites are organized with multiple smaller deployments around a larger hub. Communications and logistics networks tend to embrace this model.
  • Locally Concentrated: These are smaller networks, often servicing campus settings, such as those common to healthcare, education, and industrial sites. They also tend to feature a number of small deployments connected to a larger central facility.
  • Self-Sustained Frontier: This pattern, with widely spread footprints ranging from regional to global, consists of the largest individual edge sites. These sites carry many traditional data center characteristics but tend to be of modular construction. These are often employed by cloud providers to serve sizable areas. Smaller versions are commonly used for disaster recovery as well.
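
The deployment patterns fit the same mold. Continuing the illustrative sketch above (again, our own naming, not a published schema):

```python
from enum import Enum

class DeploymentPattern(Enum):
    """The four edge deployment patterns, describing network footprint."""
    GEOGRAPHICALLY_DISPERSED = "Geographically Dispersed"
    HUB_AND_SPOKE = "Hub and Spoke"
    LOCALLY_CONCENTRATED = "Locally Concentrated"
    SELF_SUSTAINED_FRONTIER = "Self-Sustained Frontier"
```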

This categorization was valuable. It gave us two ways to define edge sites: by archetype (use case) and by geography. But there is more to consider. The physical environment and corresponding characteristics of sites within a given group add a final layer of site analysis we can use to quickly and easily configure these edge sites to meet the specific needs of our customers. Those categories are:

  • Conditioned and Controlled (<6 kW per rack or >6 kW per rack): These are purpose-built spaces that are climate-controlled and secure. Rack density is the only characteristic that differentiates sites in this category.
  • Commercial and Office: These are occupied spaces with existing, but limited, climate control and sites that are typically less secure.
  • Harsh and Rugged: These require more robust systems and enclosures to protect against high levels of airborne particulate. They are often industrial sites with a risk of water exposure and close proximity to heavy traffic or machinery. They lack climate control and are far less secure.
  • Outdoor Standalone: These are outside and unmanned sites, exposed to the elements and requiring a shelter or enclosure. They can be in remote locations that require some time to reach for planned or unplanned service.
  • Specialty: These sites likely share characteristics with one of the above categories but must be handled differently due to special regulatory requirements that could be tied to application, location, or other factors.
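
The environment categories supply the third dimension of the sketch. Here the rack-density split within Conditioned and Controlled is deferred to a numeric attribute on the site record in the next example, rather than modeled as two separate members:

```python
from enum import Enum

class Environment(Enum):
    """Physical environment categories for edge sites."""
    CONDITIONED_AND_CONTROLLED = "Conditioned and Controlled"  # sub-split at 6 kW per rack
    COMMERCIAL_AND_OFFICE = "Commercial and Office"
    HARSH_AND_RUGGED = "Harsh and Rugged"
    OUTDOOR_STANDALONE = "Outdoor Standalone"
    SPECIALTY = "Specialty"
```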

This defining work has established a clear and previously nonexistent methodology to help us understand (1) the IT functionality and characteristics each site must support; (2) the physical footprint of the edge network; and (3) the infrastructure attributes required of each deployment. With those data points, we can configure, build, and deploy exactly what is needed faster and more efficiently while minimizing time on site for installation and service. Simply put, it allows us to bring an element of standardization to the edge that previously seemed impossible. As we’ve seen in the data center, standardization reduces timelines and costs, and streamlines the deployment process for our customers.
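
Putting the three dimensions together, a site record might look like the sketch below, which reuses the enumerations defined above. The example classification of a retail branch (Human-Latency Sensitive, Geographically Dispersed, Commercial and Office) is our own illustrative guess at how one site could map, not a published Vertiv mapping:

```python
from dataclasses import dataclass

@dataclass
class EdgeSite:
    """One edge deployment, classified along the three dimensions."""
    name: str
    archetype: EdgeArchetype        # use case
    pattern: DeploymentPattern      # network footprint
    environment: Environment        # physical site conditions
    rack_density_kw: float          # resolves the <6 kW / >6 kW split

    def is_high_density(self) -> bool:
        # Conditioned and Controlled sites are split at 6 kW per rack.
        return self.rack_density_kw > 6.0

# Hypothetical retail branch in a geographically dispersed network:
store = EdgeSite(
    name="Store #1042",
    archetype=EdgeArchetype.HUMAN_LATENCY_SENSITIVE,
    pattern=DeploymentPattern.GEOGRAPHICALLY_DISPERSED,
    environment=Environment.COMMERCIAL_AND_OFFICE,
    rack_density_kw=4.0,
)
print(store.is_high_density())  # False
```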

If you manage a network with edge assets, how would you categorize your sites across these three areas? Have you ever been asked, “What’s Your Edge?”
