Data centers are a critical part of digital infrastructure. They are constantly evolving, and as demand for them rises, so do the challenges they have to solve. And there are a lot: energy, sustainability, talent, hardware and much more.
So, what can data center operators do? Let's find out by exploring the main challenges they currently face. A recent Global Data Center Survey by Uptime Institute tracks the latest trends in the area:
The hardware game is strong
The first thing that comes to mind when we think of data centers is servers, of course. A lot of servers. That continues to be the case, and now servers live longer than ever before. The survey, which features responses from 800 data center operators and 700 data center suppliers, designers and advisors, shows that in 2015 about 34% of respondents kept their servers in operation for at least five years. In 2022, 52% of respondents said the same.
Uptime cites several reasons for this trend. One of the most obvious is semiconductor availability, which has been quite limited for the past couple of years. Component shortages drove prices up and lengthened waiting times, so many organizations are postponing their upgrades until conditions improve. The same goes for data center operators, which have to make the most of the hardware they already have.
The Uptime survey also found that new hardware might not be as beneficial as before, especially when it comes to power efficiency. Historically, new releases offered big improvements in power efficiency, but that's no longer always the case. Often the optimizations are incremental, so there's little incentive to adopt the latest generation every time. Instead, it's more cost-effective to skip a generation or two and capture the bigger accumulated gains later.
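As a back-of-the-envelope sketch of that trade-off (the 5% per-generation figure below is an assumption for illustration, not a survey number), compounding several small efficiency steps shows what a skipped-generation upgrade can bank:

```python
# Hypothetical illustration: compare upgrading at every server generation
# versus skipping ahead to bank one larger accumulated efficiency gain.

def cumulative_gain(per_gen_gains):
    """Compound a sequence of per-generation efficiency gains."""
    factor = 1.0
    for g in per_gen_gains:
        factor *= 1.0 + g
    return factor - 1.0

# Suppose each new generation improves performance-per-watt by only ~5%.
gains = [0.05, 0.05, 0.05]

# Upgrading every generation pays refresh costs three times for three
# small steps; waiting for generation 3 pays once for the same ~15.8%
# accumulated improvement.
total = cumulative_gain(gains)
print(f"accumulated gain after 3 generations: {total:.1%}")
```

The point of the sketch is that the refresh cost is paid per upgrade, while the efficiency gains compound regardless of when you buy.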
Data center outages are still a problem
While end users rarely feel the effects of a data center outage, these events still happen, and Uptime's survey shows they are relatively widespread. In 2022, 60% of respondents said they had at least one data center outage in the past three years. There's some good news: that percentage is dropping steadily, from 78% in 2020 and 69% in 2021.
There's also a drop in outages the operators themselves deem "serious/severe". Averaged across all Uptime surveys to date, such outages made up 20% of the total; in 2022 they were down to 14%.
Despite all of this, the overall number of outages continues to rise year-over-year because the data center footprint keeps growing. At least the frequency of these events isn't rising at the same rate, which suggests operators' efforts are paying off.
And they should, as the study also shows that each data center outage is becoming more and more expensive. 25% of respondents said their latest outage cost them more than $1 million in direct and indirect costs combined, up sharply from 2021, when 15% of participants said the same. In 2022, 45% of respondents said their most recent outage cost them between $100,000 and $1 million, compared to 47% a year before.
Uptime says there are several reasons for this. Among them are obvious ones like inflation, bigger fines from regulators, and increased costs across the board. But the biggest factor is indirect costs, driven by the ever-increasing value of economic activity delivered through digital services. Each data center outage thus brings bigger revenue losses.
And what's the main cause of outages? Power. 44% of respondents blamed some sort of power issue. A distant second is network issues, cited by 14% of survey participants, followed by IT system problems at 13% and third-party problems at 8%.
Y u no backup, still?
Ah, yes, the good old meme is here, and for a good reason. It's old, and so is the issue of not backing up data properly. “Organizations are becoming more confident in using the cloud for mission critical workloads, partly due to a perception of improved visibility into operational resiliency,” Uptime wrote. “However, other data suggests cloud users’ confidence may be misplaced… Users appear more confident that the cloud can handle mission-critical workloads, yet over a third of users are architecting applications vulnerable to relatively common availability zone outages.”
The survey shows that 63% of respondents wouldn't place mission-critical workloads in a public cloud, down from 74% in 2019. Only 21% say they have adequate visibility into the resilience of public clouds. Their main worry is availability zones: many respondents still don't replicate applications across multiple cloud zones, which increases their risk.
The dreaded talent gap
The global IT industry has suffered from a shortage of skilled employees for years. Some areas are hit worse than others, and data center staffing is one of them, with needs that keep growing. Uptime notes that in 2019 the global data center segment needed a further 2 million full-time employees, and it estimates that by 2025 that need will rise to 2.3 million, along with the creation of entirely new job categories and specialized skills.
In total, 53% of data center operators say they have difficulty finding the employees they need, a jump from 47% in 2021 and 38% in 2018. Also, 42% say their talent is being poached by competitors; in 2018 only 17% of respondents said the same.
“The staff shortage affects almost all data center job roles globally. In mature data center markets, such as North America and Western Europe, much of the existing workforce is ageing and many professionals expect to retire around the same time, leaving data centers with a shortfall on both headcount and experience. Hiring efforts are often offset by jobseekers’ poor visibility of the sector. Efforts to bolster talent pipelines by attracting career-changers to the data center industry are still nascent”, the report says.
Sustainability is also a challenge
The data center industry prides itself on its sustainability efforts, and it does a very good job indeed, but there are still a lot of issues left unaddressed.
For example, 63% of data center operators expect local authorities in their region to start requiring public environmental data reports within five years, but only 37% actually collect and report carbon emissions data (4 percentage points more than in 2021). Surprisingly, just 39% report their water use, a massive decrease compared to 51% in 2021.
“While efforts to reduce power footprints and improvements in efficiency have had a dramatic impact on data center compute capacity and power usage, there’s no doubt that greater innovation will be required as data centers continue to rise in both power consumption and complexity,” writes Brian Korn, Vice President of Data Center Computing at Advanced Energy, in Data Center Frontier.
He gives a simple example: “If you consider a data center with a power use of 10 MW, servers may consume 50 per cent of that energy with a power usage effectiveness (PUE) of 1.6. A two per cent increase in the energy efficiency of the server power supply leads to a 1.6 per cent decrease in electricity use. That translates to 1.4 million kWh saved annually, which is equivalent to reducing CO2 emissions by more than 21 million pounds. At $0.07 per kilowatt-hour, that’s $98,000 in savings. In large data centers, which may have an even higher power consumption and billing rate, these kinds of savings add up fast.”
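Korn's figures line up if each watt saved at the server is multiplied by the facility's PUE, since cooling and distribution overhead shrink along with the IT load. A minimal sketch of that arithmetic (the 8,760 hours per year at constant full load is our simplifying assumption, not his):

```python
# Reconstructing the savings arithmetic from Korn's 10 MW example.
# Assumption (ours): the facility runs at constant load year-round.

facility_mw = 10.0       # total facility power draw
server_share = 0.50      # servers consume 50% of facility energy
pue = 1.6                # each watt saved at the server saves ~1.6 W overall
psu_gain = 0.02          # 2% improvement in power-supply efficiency
price_per_kwh = 0.07     # $/kWh
hours_per_year = 8760

server_mw = facility_mw * server_share                 # 5 MW at the servers
facility_saved_mw = server_mw * psu_gain * pue         # 0.16 MW, i.e. 1.6%
kwh_saved = facility_saved_mw * 1000 * hours_per_year  # ~1.4 million kWh
dollars_saved = kwh_saved * price_per_kwh              # ~$98,000

print(f"{kwh_saved:,.0f} kWh saved, about ${dollars_saved:,.0f} per year")
```

This reproduces the 1.6% facility-wide reduction, the roughly 1.4 million kWh, and the roughly $98,000 he cites.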
Optimizing energy use by a mere 2% may seem easy, but it isn't, especially given the constant growth in server counts and power demand. More platforms are moving from 12V to 48V power distribution to accommodate those needs, and that's a good thing: it can substantially reduce costs through better thermal performance, increased power density and improved efficiency.
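The efficiency argument for 48V comes down to resistive losses: power dissipated in a distribution path is I²R, so quadrupling the voltage for the same delivered power cuts the current four-fold and the resistive loss sixteen-fold. A toy illustration (the path resistance is an assumed figure, purely for scale):

```python
# Why 48V distribution loses less than 12V: for the same delivered power,
# 4x the voltage means 1/4 the current and 1/16 the I^2*R loss.

def distribution_loss_w(power_w, voltage_v, resistance_ohm):
    """Resistive loss in a distribution path carrying the given power."""
    current = power_w / voltage_v
    return current ** 2 * resistance_ohm

power = 1000.0   # 1 kW delivered to a rack segment
r_path = 0.01    # assumed 10 milliohm distribution path

loss_12v = distribution_loss_w(power, 12.0, r_path)   # ~69.4 W lost
loss_48v = distribution_loss_w(power, 48.0, r_path)   # ~4.3 W lost
print(f"12V loss: {loss_12v:.1f} W, 48V loss: {loss_48v:.1f} W")
```

The same relationship also explains the thermal and density benefits the article mentions: less waste heat in the busbars and cables means less cooling and thinner conductors.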
Korn says that about 15% of data centers have moved to 48V, and that about half of the leading hyperscalers will do so by the middle of the decade. This will open the door for smaller players and traditional enterprise deployments too.
Another area where data center operators will continue to improve is site selection. Being “green” is now a very important trait for data centers, not just for their operators. It can also attract bigger clients, as they too want to show their customers that their entire business, including partners and suppliers, is doing its part for sustainability.
But achieving and maintaining that sustainability isn’t easy everywhere. Some places are better than others. “The decision to build a fundamentally sustainable data center literally needs to start with the ground. Unlike energy efficiency and technologies that improve workload and performance features such as power density and power distribution, a sustainable data center needs to start with a site that meets the criteria established for a sustainable data center site. It’s rarely, if ever, that something can be added after the fact,” says analyst David Chernicoff for Data Center Frontier.
This is why it’s important to consider a lot of variables when choosing a location for a new data center. Among them are power sources, water supply, but also job availability, community inclusion and more. You also have to be willing and able to adapt and adjust the approach and criteria as you start to research the area in more detail.
“Data centers are most often purpose-driven now. They are built for specific reasons to support specific business goals,” Chernicoff adds. As such, data centers evolve differently than they did just a few years ago. Operators have to be ready and willing to adapt to those changes quickly in order to be better prepared for the challenges they will continue to face.