Regional General Manager, APAC
Date: October 22, 2018

The Age of Smart Cities Can’t Afford Zombie Servers—Here’s Why


A year ago, I came across a Computerworld story on a research paper that found 25 percent of all physical servers (and up to 30 percent of all virtual servers) are “zombie servers”: machines that have shown no external communications or contribution to workloads for at least six months.

This research was conducted in 2017 by Jonathan Koomey, a research fellow at Stanford University, and Jon Taylor, a partner at the Anthesis Group. An earlier 2015 report from the same study estimated that there are more than 10 million zombie servers globally, including standalone servers and host servers in virtual environments. That translates to more than $30 billion in data center capital doing nothing.

Zombie servers are a tax on power, budgets, resources, and our environment, and they are a well-known problem in the enterprise world. To help our clients with this problem, we developed a service that helps IT teams find the “lost” equipment connected to their networks. To be fair, some servers are deliberately kept idle for backup or seasonal demand. More often, however, idle servers simply drain operational budgets. Unfortunately, this waste is an issue that hasn't gone away with the advent of cloud computing.
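As a rough illustration of how such a sweep can work (the inventory data, names, and six-month threshold here are hypothetical, not a description of our actual service), a short script can flag any server whose last observed activity falls outside the window:

```python
from datetime import datetime, timedelta

ZOMBIE_WINDOW = timedelta(days=183)  # roughly six months of inactivity


def find_zombies(servers, now=None):
    """Return hostnames with no recorded activity inside the window.

    `servers` maps hostname -> datetime of the last observed network
    traffic or completed workload (hypothetical inventory data).
    """
    now = now or datetime.now()
    return sorted(
        host for host, last_seen in servers.items()
        if now - last_seen > ZOMBIE_WINDOW
    )


inventory = {
    "app-01": datetime(2018, 10, 1),     # active this month
    "db-legacy": datetime(2017, 11, 3),  # idle for nearly a year
    "batch-07": datetime(2018, 2, 14),   # idle for about eight months
}
print(find_zombies(inventory, now=datetime(2018, 10, 22)))
# -> ['batch-07', 'db-legacy']
```

In practice the hard part is collecting trustworthy last-activity data in the first place, which is exactly where the "lost" equipment tends to hide.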

Sticker shock is a real thing

Even as organizations migrate their workloads to the cloud (dedicated or hybrid), there are indications that they aren't keeping track of how resources are consumed (or, in this case, not consumed).

Organizations often turn to the cloud to lower overall costs while addressing the on-demand needs of users across platforms and devices. Many, however, are getting sticker shock from unanticipated spikes in their monthly cloud computing bills.

What organizations have discovered is that the so-called simplicity of the cloud comes at a price. Once an organization has bought into cloud computing, the associated costs aren’t always obvious. Resources are being used (and billed for) even when they are not needed. 

Unfortunately for clients, vendors often fail to walk them through the complexities that come with switching to the cloud. This leaves clients traversing a new cost management landscape without a clear roadmap or a reliable guide to follow.

Thinking of cloud computing as a resource without a ceiling is a big mistake

Smart cities use information and communication technologies to improve operational efficiency and to raise the quality of government services and citizen welfare. Singapore is a contemporary example, but globally, smart city data and processing needs are growing, becoming more complex and demanding.

The dominant narrative in the tech industry is that most data is best crunched centrally, in the cloud. The counterpoint? Many emerging applications require quick information processing for on-demand insights. One example would be autonomous vehicles operating in “smart cities,” where a slow-down in processing could lead to a deadly accident.

The rise of smart cities is tied directly to the internet of things (IoT). It’s my belief that edge computing will see even greater demand as “smart city” solutions continue to mature. The growth of the IoT is one of the primary drivers behind migrating centralized cloud processing of datasets to edge networks and intelligent devices.

Perhaps the hype around cloud and edge networks and intelligent devices is just that, hype. But I don’t believe it is. The demand for edge computing is starting to feel very real as cities build out networked surveillance for safety, security, and improved urban planning. Networked surveillance coupled with smart sensors is increasingly being used to monitor traffic conditions while simultaneously aiding real estate planning in urban areas.

Furthermore, many applications are now “virtualized,” meaning they exist separately from any specific type of hardware. Code can thus be packaged in digital “containers” and easily moved around within data centers and, increasingly, closer to the edge.

What it boils down to for clients is this: the cloud may not be the right resource for everything. They may have an excess of underutilized equipment that they’re paying too much for. But we’re all living in the same, figurative, smart city.

For all of us to grow, we need to be open to a new strategy that focuses on the goals of sustainability and efficiency, without limiting our thinking to a single type of technology to achieve those goals. The mistakes we make in underutilizing our resources, or in not ensuring our budgets are commensurate with the demands of the business, mean we'll keep getting sticker shock from cloud computing bills or stay burdened by resourcing far beyond our needs.

The last word

Keep track of what you're spending. Do this by drawing up a framework with rules in place for how teams consume resources, including accessing accounts and subscriptions.
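One concrete piece of such a framework is a periodic check of each team's month-to-date spend against its allotted budget. The teams, figures, and 80 percent alert threshold below are purely illustrative, a minimal sketch of the idea rather than a recommended tool:

```python
def over_budget(spend, budgets, threshold=0.8):
    """Flag teams whose month-to-date spend has reached `threshold`
    of their monthly budget. All figures are hypothetical examples.

    Returns a mapping of team -> fraction of budget consumed.
    """
    return {
        team: spend[team] / budgets[team]
        for team in spend
        if budgets.get(team) and spend[team] / budgets[team] >= threshold
    }


budgets = {"analytics": 10_000, "web": 5_000, "iot-pilot": 2_000}
spend = {"analytics": 9_200, "web": 1_300, "iot-pilot": 2_400}

for team, ratio in sorted(over_budget(spend, budgets).items()):
    print(f"{team}: {ratio:.0%} of monthly budget consumed")
# analytics: 92% of monthly budget consumed
# iot-pilot: 120% of monthly budget consumed
```

The real value comes from wiring a check like this into whatever billing export your cloud provider offers, so the alert arrives mid-month rather than with the invoice.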

And remember, we are all part of the $64 billion IT services and infrastructure industry. When we lose sight of where we’re spending resources, forget that they exist, or don’t utilize them to the best of their ability, it becomes a problem for all of us, as an industry.

We’re all in the same “smart city” and there are serious implications not just to a single business, but to our entire planet. We can’t live in a city that is overpopulated by zombies, only exists in the cloud, or isn’t brave enough to look over to the edge when needed.

Have a think about it. Let me know your thoughts about how we can collectively do better on this front.

Questions? Comments?

Talk to our team of expert engineers, product managers, and technicians by emailing us at experts@curvature.com