The major data center mistakes in 2023

09.08.2023

Data centers are in record demand these days. The need for data and processing power is constantly rising, driving companies to seek more hardware resources. This means a rush to build more data centers, improve existing ones, and utilize resources to the maximum.

This is a great opportunity for data center operators and clients alike, but it also creates some risks. In the mass rush to secure as many data center resources as possible, both operators and clients can make mistakes. Some of them are minor, but others can get expensive. So, it is wise to learn from the mistakes others have made in order to lower the risk of repeating them and to minimize losses.

Some of the common data center mistakes haven’t changed much over the past several years. Others, though, are newer and are a consequence of new technologies that are gaining popularity. One example is generative artificial intelligence (AI), which, according to a new report, is generating a “tsunami” of demand. Rushing into new things is always risky, especially when you have big expectations about something relatively new and unknown.

The “tsunami” of AI demand

The report in question was conducted by datacenterHawk and investment firm TD Cowen. It found a massive jump in demand for data center leasing. The reason: cloud providers are racing to secure capacity for AI workloads, DataCenterFrontier reports.

As such, datacenterHawk reports 835.6 megawatts (MW) of absorption in North America in the second quarter of 2023, a record for a single quarter. Also, TD Cowen says its analysis finds “a tsunami of AI demand,” with 2.1 gigawatts – that’s 2,100 MW – of data center leases signed in the last 90 days.

“There’s still such a strong demand appetite out there, despite the headwinds that we see. It’s a strong sign for our industry that demand is here, and I don’t see it going anywhere,” says David Liggitt, CEO of datacenterHawk.

All of this is happening while vacancy rates for data center space in North America are at a historic low of 3.16%, with just 285 MW of available power. The demand growth is expected to continue, as AI is just getting started. Cloud platforms are already the biggest data center customers and often seek to pre-lease entire facilities before they are even built.
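For perspective, those two figures imply a rough total market size. Here is a minimal back-of-the-envelope calculation; the implied total is our own arithmetic, not a number from the report:

```python
# Back-of-the-envelope: what total leasable capacity do the reported
# vacancy rate and remaining power imply? (Our arithmetic, not the report's.)
vacancy_rate = 0.0316   # 3.16% vacancy reported for North America
available_mw = 285      # MW of power still available

implied_total_mw = available_mw / vacancy_rate
print(f"Implied total leasable capacity: ~{implied_total_mw:,.0f} MW")
# -> roughly 9,000 MW, of which only 285 MW remains unleased
```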

Whatever space is left comes at a high price, which could drive smaller customers away. “Operators with capacity coming online in less than 24 months can charge a premium for their capacity,” TD Cowen wrote. “This in our view reflects the growing scarcity of data center capacity as hyperscalers look to secure their access to future compute … This scarcity is also reflected in enterprise behavior as the growing requirements of hyperscalers have increasingly crowded enterprises out of the data center market.”

The new technologies are forcing changes

AI and related technologies are certainly game changers in a lot of ways. As the report shows, they are already changing the way data centers are used, developed, maintained, and even leased. The DCF 2023 forecast already foresaw that: “Data center space will be harder to find and could cost more, particularly in the second half of 2023.”

With this in mind, it becomes even more important to improve and optimize every detail of a data center, across the entire process and all of its aspects. It starts with the data center design, from the building itself upward. Among the most important considerations is ensuring there is enough space between the hardware to avoid excessive heat buildup. It is also key that the plan envisions enough space for future expansions; as the demand figures show, chances are they will be needed rather quickly.

Of course, this has to be weighed against other factors such as available lot space, local regulations, surroundings, etc. With that said, there is a risk of ending up with too much free space after the (re)construction, which can create cooling difficulties or other issues.

Another challenge is data center thermal management, which is tied to both the location and the design. All of these features have to be taken into account when choosing the design criteria. Space planning should cover a lot more than hardware location. “Many organizations use only IT equipment when determining their space needs. On the other hand, mechanical and electrical equipment need a significant amount of room. Many businesses overlook the square footage required to house office space, equipment yards, and IT equipment staging areas. As a result, it is critical to establish your design criteria before creating your space plan. There’s no way to visualize the entire site needed to satisfy your complete requirements without it,” says SecureIT Environments.
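To illustrate that advice, a space plan could budget each category explicitly instead of sizing for the IT floor alone. The categories mirror the quote above, but every ratio below is a hypothetical placeholder, not a design standard:

```python
# Hypothetical space budget: the IT floor is only part of the total footprint.
# All ratios are illustrative assumptions, not industry standards.
it_floor_sqft = 10_000  # planned white space for racks (assumed)

space_budget = {
    "IT floor (racks)": it_floor_sqft,
    "Mechanical/electrical rooms": it_floor_sqft * 0.60,  # assumed ratio
    "Office space": it_floor_sqft * 0.15,                 # assumed ratio
    "Equipment yard": it_floor_sqft * 0.25,               # assumed ratio
    "IT staging area": it_floor_sqft * 0.10,              # assumed ratio
}

total = sum(space_budget.values())
for area, sqft in space_budget.items():
    print(f"{area:<28} {sqft:>9,.0f} sq ft")
print(f"{'Total site requirement':<28} {total:>9,.0f} sq ft")
# Sizing for the IT floor alone would understate the requirement by half.
```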

All of this can “tempt” operators into overcomplicating the design. Instead, simplicity is better: it allows easier maintenance and optimization and lowers overall costs and issues in the long run, especially if the goals allow for a modular approach.

Power and environment

Other data center mistakes often come via power management and sustainability. Power is key for data centers and requires a lot of effort, covering power sources, management, sustainability, failure processes, and more.

“The objective of data center design is to maximize uptime while minimizing power consumption, so you need to choose the right power equipment based on projected capacity. To ensure adequate power, you may use redundancy calculations that project demand for three times the power the servers actually use, which is wasteful,” says PCX Corp. It advises considering long-term power consumption trends and choosing equipment that can deliver enough power without waste.
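To make that advice concrete, here is a minimal sketch contrasting a blanket 3x overprovision with a plan based on projected load plus N+1 redundancy. All figures and the growth model are assumptions for illustration, not real measurements:

```python
import math

# Sketch: blanket 3x overprovisioning vs. trend-based capacity planning.
# All numbers are illustrative assumptions.
measured_peak_kw = 800    # current measured IT load (assumed)
annual_growth = 0.15      # assumed 15% yearly load growth
horizon_years = 5         # planning horizon
ups_module_kw = 250       # capacity of one UPS module (assumed)

# The naive approach criticized above: provision 3x the current draw.
naive_capacity_kw = 3 * measured_peak_kw

# Trend-based approach: project the load, then add one redundant module (N+1).
projected_kw = measured_peak_kw * (1 + annual_growth) ** horizon_years
modules_needed = math.ceil(projected_kw / ups_module_kw)
n_plus_1_capacity_kw = (modules_needed + 1) * ups_module_kw

print(f"Projected load in {horizon_years} years: {projected_kw:,.0f} kW")
print(f"Naive 3x provisioning:    {naive_capacity_kw:,.0f} kW")
print(f"Trend-based N+1 capacity: {n_plus_1_capacity_kw:,.0f} kW")
```

Even with growth and redundancy included, the trend-based plan lands below the blanket 3x figure in this example, which is exactly the waste the quote warns about.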

For that, the cooling also has to be on point. As expected, cooling is a hot topic in the world of data centers. It’s important for multiple reasons, including power consumption and sustainability. Cooling technologies change constantly and give data center operators a lot more choice and flexibility in creating the ideal solution for their needs. This requires additional effort and care when selecting solutions, because a wrong step could mean ending up with an expensive, bespoke solution that doesn’t perform as expected.

Long-term management

This is where most mistakes happen. They are usually small and difficult to catch, but they can mount as time goes by.

Some of the mistakes involve security. For example, a study by IBM shows that 52% of data breaches are caused by malicious attacks. In most cases the causes are compromised credentials, phishing, cloud misconfiguration, vulnerabilities in third-party software, and physical security compromises. The majority of those causes are human-related, which makes them both high-risk and likely to occur. Active prevention is key.

Automation can be both an asset and a risk. According to Cisco, 49% of organizations already deploy some kind of automation. With the rise of AI, that share is going to jump a lot. For data centers, automation can help with security and management, but if done wrong, it can wreak havoc, for example because of wrongly chosen priorities or poor configuration. “Top use cases of data center automation include provisioning and orchestration, virtual machine management, parts management, server power budgeting, scheduled charts and reports, and thresholds and alerts,” says Sunbird. So, every detail must be carefully evaluated to see how it can be automated.
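As a simple illustration of the “thresholds and alerts” use case, a monitoring check might look like the sketch below. The sensor names, threshold values, and alert mechanism are hypothetical stand-ins for whatever a real DCIM or BMS platform exposes:

```python
# Minimal sketch of a threshold-and-alert check.
# Sensor names, thresholds, and the alert path are all hypothetical.
THRESHOLDS = {
    "inlet_temp_c": 27.0,   # assumed inlet temperature ceiling
    "humidity_pct": 60.0,   # assumed relative humidity ceiling
    "rack_power_kw": 8.0,   # assumed per-rack power budget
}

def read_sensors() -> dict:
    """Placeholder: a real system would query the DCIM/BMS API here."""
    return {"inlet_temp_c": 29.5, "humidity_pct": 48.0, "rack_power_kw": 7.2}

def send_alert(metric: str, value: float, limit: float) -> None:
    """Placeholder: a real system would page on-call staff or open a ticket."""
    print(f"ALERT: {metric} = {value} exceeds threshold {limit}")

def check_once() -> None:
    readings = read_sensors()
    for metric, limit in THRESHOLDS.items():
        if readings.get(metric, 0.0) > limit:
            send_alert(metric, readings[metric], limit)

check_once()  # in practice this would run on a schedule
```

Note how a single wrong number in THRESHOLDS would either flood operators with alerts or silently suppress them, which is exactly the kind of misconfiguration risk described above.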

Money, money, money

All of this brings us to the costs. Poor cost planning and estimation is almost always the biggest data center mistake. It has been, it is, and it probably always will be. And there is a plethora of reasons for that.

These mistakes include underestimating or overestimating the timeline: wrong expectations about the needed resources, expansions, upgrades, etc. can bring massive costs down the line. Failing to recognize challenges, issues, or other factors will also affect costs. One such factor could be dependency on a single vendor or insufficient interoperability between equipment models.

“It’s easy to fall into the mistake of focusing solely on capital expenditure; building or extending can be expensive. Capital cost modeling is crucial, but if you haven’t factored in operational and maintenance expenses (OpEx), you’ve jeopardized your company’s long-term success. The maintenance costs and the operating expenses are the two most essential components in calculating data center OpEx cost modeling. The maintenance expenditures include ensuring that all necessary facility support data center infrastructure is maintained correctly,” says SecureIT Environments.

“Organizations that are operating their own data centers are making capital expenditure investments with associated upfront costs as well as recurring upgrade costs. A typical CapEx approach is focusing resources on non-core business of the organization. Third party IT hardware maintenance can effectively delay the need for equipment refreshes while saving 30-40% compared to OEM support contracts. In both CapEx-oriented and OpEx-oriented approaches, organizations need to optimize their usage of resources so that they are only paying for what they really need,” says Park Place Technologies.
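A rough way to sanity-check that claim is a simple cost comparison of OEM support against third-party maintenance at the quoted 30-40% discount. Only the savings range comes from the quote; the contract price and horizon below are invented for illustration:

```python
# Sketch: OEM support vs. third-party maintenance over a planning horizon.
# The 30-40% savings range comes from the quote above; everything else
# is an illustrative assumption.
years = 5
oem_support_per_year = 100_000          # assumed OEM contract cost
savings_low, savings_high = 0.30, 0.40  # range quoted by Park Place

oem_total = oem_support_per_year * years
tp_best = oem_total * (1 - savings_high)   # best case for third party
tp_worst = oem_total * (1 - savings_low)   # worst case for third party

print(f"OEM support over {years} years:  ${oem_total:,.0f}")
print(f"Third-party maintenance range: ${tp_best:,.0f} - ${tp_worst:,.0f}")
# Deferred equipment refreshes (CapEx) would widen the gap further, but
# that depends on hardware lifecycle details not modeled here.
```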

The key takeaway is that in order to avoid data center mistakes, or at least to minimize their impact, the main requirement is vigilance. By being proactive, data center operators (and their clients) can enjoy more of the benefits of their setups.
