Towards an automated network future? Make sure you have prepared your business case

Paid feature We expect software development teams to move rapidly and at scale these days. But can you be sure your data center and network operations teams can keep up?

Raw network speed and data center capacity are only part of the equation, says Jon Lundstrom, director of business development for Nokia’s Webscale organization. Networks must be able to support cloud-native architectures for cloud-native applications that “don’t live in one place, they live in containers, and they’re potentially short-lived.”

At the same time, DevOps teams have a ton of automation. “They want to consume the network, whether it’s a private cloud or a public cloud, in the same way.” They want the network to be a black box to which they can pass high- or low-level information covering security or connectivity via an API, and “then make the network happen.”
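To make that idea concrete, here is a minimal sketch of the kind of declarative, API-driven request a DevOps pipeline might send to a fabric management system. The endpoint, payload fields, and service names are illustrative assumptions for this article, not Nokia’s actual API.

```python
import requests  # third-party HTTP client, assumed installed

# Hypothetical intent: a DevOps pipeline asks the fabric for connectivity
# between two workload groups without caring how the network delivers it.
intent = {
    "name": "payments-to-db",                  # illustrative service name
    "endpoints": ["payments-frontend", "orders-db"],
    "bandwidth_mbps": 500,                     # high-level requirement
    "isolation": "tenant-private",             # security/connectivity intent
}

# Placeholder URL and schema; not a real Nokia endpoint.
resp = requests.post(
    "https://fabric-manager.example.net/api/v1/intents",
    json=intent,
    timeout=10,
)
resp.raise_for_status()
print("Fabric accepted intent:", resp.json().get("id"))
```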

Achieving this presents a cultural and technical challenge for the NetOps team managing the network, given that it has traditionally focused on stability and minimizing risk, Lundstrom said. “They can absorb technology at a certain rate. But it’s been hard for them to keep up with the DevOps side of the equation.”

The problem is that traditional network planning practices don’t fit this new world, according to Lundstrom. Typically, the modeling was revenue-driven, with data center operators focusing on “new revenue” that was, theoretically, on the table. Planners would make assumptions based on the new services they might offer, calculate the investment costs needed to deliver them, and determine the payback period.

However, this approach hides the internal operational costs of running the network over the long term, and is increasingly out of step with the optimization and automation possibilities offered by modern platforms and architectures.

In reality, capital expenditures are “the smaller part of the larger business case, which is really the optimization or cost side of the equation. Business cases on the cost side are more complicated, because they actually take an introspective view of where they spend money today.”

The adoption of newer technologies by Nokia customers has already enabled a good degree of automation, says Lundstrom. “But there are still a lot of areas that we think can be optimized. And again, it’s about trying to meet the needs of the DevOps team, which needs to go faster and have more scale without sacrificing that reliability.”

Sometimes introspection is good

Nokia’s response was to develop an online data center fabric business case analysis tool, which takes a much broader view of the network lifecycle, encompassing the Day 0 design and Day 1 deployment stages, as well as the much longer, ongoing Day 2+ operations and management phase.

This allows network and data center planners to dig deeper into their current configuration and compare it to different future scenarios, including a move to Nokia’s Data Center Switching Fabric, which encompasses its Fabric Services System, SR Linux network operating system, and associated hardware platforms, and which explicitly aims to allow NetOps teams to develop their own optimizations.

Lundstrom says it’s important to remember that with any data center project, whether it’s a greenfield effort or a redesign of an existing on-premises or colocation space, “the actual network part of the investment doesn’t necessarily end up being the biggest part. It’s either servers or bricks and mortar.”

At the same time, adding modern hardware means a cost for the NetOps team to “learn this new technology, understand how it can gain, and then effectively become efficient and actually realize those gains.”

“So what we’ve done is really focus on operational costs, and what the total effort in man-hours will be between the two different solutions. And it’s really the combination of automation and optimization of environments.”

With respect to the underlying data and assumptions of the model, there is an element of “tribal knowledge” that both vendor architects and product managers contribute to. The model also draws on the experience of dozens of customers across multiple continents, ranging from enterprises and service providers to webscalers and Tier 2 carriers.

The model also draws on the experience of engineers within Nokia’s Bell Labs organization, who “have a great deal of engineering knowledge as well as experience with customers.”

The model covers dozens of potential job functions that could be affected by the implementation of new technologies, many of which might not be apparent in traditional scenario planning.

“So it’s kind of an education, like: ‘Hey, here are the 50 job functions you have. Which ones will be impacted by this new technology?’ Some of them are going to be positive, but a new integration, for example, is going to take effort, isn’t it?”

Similarly, Nokia has had to “ensure that our business case reflects the fact that new integrations wouldn’t necessarily be necessary if you were just using the same technology.”

Naturally, power density and sustainability have been built into the tool, though realizing space and power-efficiency benefits relies on using new hardware, says Lundstrom. “In the tool, we compare the power or space per gigabit of throughput needed between the existing solution and the new Nokia one. And those savings are really due to the move to the next generation of silicon.”
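As a rough illustration of the “per gigabit” comparison Lundstrom describes, the snippet below derives watts per Gbit/s for two switch generations. The power and throughput figures are invented placeholders, not Nokia data; only the shape of the calculation matters here.

```python
# Hypothetical figures only: watts per Gbit/s of switching throughput
# for an older and a newer switch generation.
def watts_per_gbps(power_watts: float, throughput_gbps: float) -> float:
    return power_watts / throughput_gbps

existing = watts_per_gbps(power_watts=550, throughput_gbps=3_200)    # older silicon
next_gen = watts_per_gbps(power_watts=700, throughput_gbps=12_800)   # newer silicon

saving = 1 - next_gen / existing
print(f"existing: {existing:.3f} W/Gbps, next-gen: {next_gen:.3f} W/Gbps, saving: {saving:.0%}")
```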

He cites two practical examples of how the Nokia platform and business case tool can highlight benefits that traditional planning exercises might miss.

Unboxing, optimized

In the design phase, for example, customers will at some point need to get hands-on with potential technology purchases in their own labs. But “laboratory environments are large and expensive, and they are difficult to manage.” Simply setting up a lab with the proposed kit can take months, with each round of testing followed by another round of time-consuming reconfiguration.

“If you can optimize that test case and technology evaluation, perhaps in a virtualized environment instead of a physical one, as we did with our Digital Sandbox, we think that can deliver potentially significant time savings as you go through the design process” (see Making Data Center Networking as Consumable as Compute).

Similarly, at the deployment stage, it is relatively simple to determine the “effort” of an individual to unpack a switch, add it to a rack, screw it in and power it up.

“So how do they know which cables and which fibers to plug into which ports? It ends up being a very manual process of generating what’s called a cable map,” says Lundstrom. But this could potentially be partially automated using a design tool to generate an image of which cables should connect which devices. This would make life easier for the installer and would mean a great reduction in effort and cost.
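As a sketch of what that automation could look like, the snippet below generates a simple cable map for a small leaf-spine design. The topology, port numbering, and naming scheme are assumptions for illustration, not output from Nokia’s design tools.

```python
from itertools import product

# Hypothetical design inputs: a small leaf-spine fabric with one link
# between every leaf and every spine. Port numbering is illustrative only.
spines = ["spine1", "spine2"]
leaves = ["leaf1", "leaf2", "leaf3", "leaf4"]

def cable_map(spines, leaves):
    """Return one row per cable: (a_device, a_port, b_device, b_port)."""
    rows = []
    spine_port = {s: 1 for s in spines}    # next free port on each spine
    leaf_port = {l: 49 for l in leaves}    # assume leaf uplinks start at port 49
    for leaf, spine in product(leaves, spines):
        rows.append((leaf, f"ethernet-1/{leaf_port[leaf]}",
                     spine, f"ethernet-1/{spine_port[spine]}"))
        leaf_port[leaf] += 1
        spine_port[spine] += 1
    return rows

for a_dev, a_port, b_dev, b_port in cable_map(spines, leaves):
    print(f"{a_dev}:{a_port}  <-->  {b_dev}:{b_port}")
```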

The overall results can be eye-opening. For an “average” installation of 3,200 servers spread across two data centers, with 10GE/25GE access for the servers and 40GE/100GE uplinks to the spines, and 32 full-time employees, the tool reveals a total “effort” of 104,217 hours over four years across all job functions and tasks. Adopting Nokia’s SR Linux and Fabric Services System, with new silicon providing 10GE/25GE/100GE access for servers and 100GE/400GE uplinks, would reduce the total effort to 62,728 hours, according to the tool, a saving of 40 percent. Day 2+ savings would be around 55 percent.
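The headline figure can be sanity-checked directly from the hours quoted above:

```python
# Figures quoted by the tool for the example deployment above.
baseline_hours = 104_217   # existing approach, four years, all job functions
nokia_hours = 62_728       # SR Linux + Fabric Services System scenario

saving = 1 - nokia_hours / baseline_hours
print(f"Total effort saving: {saving:.0%}")   # roughly 40 percent
```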

[Chart: Cumulative savings over four years across all operating phases and tasks]

Nokia said it will update the business case analysis tool as it continues to gather more data points and customer feedback. At the same time, anyone using the tool can adapt the underlying “100- and 200-level” assumptions to reflect their own starting point.

“But the 300-level assumptions are the ones that are kind of at the heart of the system. Those are the ones where you would come see us,” Lundstrom says. “Then we would work on a customized version of the analysis.”

Perhaps the hardest thing to model is the biggest impediment to change – the potential inertia that stems in part from the NetOps world’s traditional aversion to risk and disruptive change.

As Lundstrom notes, “Culture is one of those things that takes a long time to change, and that’s nothing you can buy with technology.”

Sponsored by Nokia.
