Data Center Admins Know that the Key to Unlocking the Potential of AI in 2021 is to Improve IT Agility and Efficiency and Accelerate Time-to-Value
Blog provided by Nader Soudah, Vice President, Global Channel, Liqid Inc.
Data centers are going through a generational transformation in which the architectures designed to facilitate cloud computing cannot keep up with the demanding, uneven data performance requirements of emerging applications in artificial intelligence and machine learning (AI+ML). In order to successfully deploy these technologies, IT users will need to transform their existing hardware infrastructures, which have become inefficient in addressing surges in data and generally uneven workloads associated with AI+ML.
Organizations will need to facilitate these changes as quickly as possible, and approaches differ greatly. What most can agree on is that data center admins must significantly improve hardware efficiency and flexibility to improve overall time-to-value for solutions across vertical markets. IT agility, i.e., how quickly hardware resources can be reconfigured via software to accommodate these diverse workloads, will be central to this transformation.
Our customers are addressing these challenges in ways big and small. Below are their five wishlist items that will facilitate this new ‘adaptive’ data center in the coming year.
1. Artificial Intelligence and Machine Learning Capabilities
As mentioned, artificial intelligence and machine learning applications are the foundational innovations driving this generational wave of data center transformation. Artificial intelligence solutions create such powerful business efficiencies and opportunities that the consultancy PricewaterhouseCoopers predicts AI-driven innovation will contribute $15 trillion to the global economy by 2030.
So what’s the problem? The sheer density of the data. Traditional computational models are repetitive, not dynamic, repeating the same calculations over and over on evolving data sets to achieve a desired outcome. Even today’s cloud-computing platforms cannot maximize the value of AI+ML due to the limitations of traditionally rigid compute architectures. In these models, resources are locked into fixed configurations, to meet specific compute demands, at the point of purchase. While these kinds of inelastic configurations worked for cloud deployments that do not require dynamic data management, they fall far short of the demands of AI+ML deployments.
The reason is that AI+ML operations are highly uneven. The data ingest phase of AI has a significantly different set of hardware requirements than the inference phase. Because resources cannot be dynamically shared across the data center, admins are forced to move entire workloads to systems built for each phase of the process. Clearly, this is as unsustainable as it is inefficient.
2. GPUs! GPUs! GPUs!
To begin to tackle these performance issues, GPUs are being deployed in volumes like never before. GPUs dramatically increase compute performance through massively parallel calculation, processing far more simultaneous data operations than general-purpose CPUs can. Indeed, demand for the technology is so high that the analyst firm Allied Market Research predicts that today’s roughly $19 billion GPU market will exceed $200 billion within the next decade.
GPU resources as they are currently deployed, however, remain limited by the same traditional data center architectures that are hindering AI+ML deployment in general. While these applications require dynamic resource management, GPUs are siloed, making them difficult to fully utilize and impossible to share across the network. The devices are extraordinarily powerful, but to use them effectively en masse, a better deployment model must be devised.
3. GPUs!!! … In Tandem with Other Accelerators!
But GPUs aren’t the only accelerator technologies being developed to answer the deployment limitations of the traditional data center. In today’s static architectures, NVMe storage, FPGAs, storage class memory, and other accelerator devices end up underutilized or overtaxed, resulting in, again, uneven systems that can’t meet data demand. Ideally, these devices could be deployed in tandem with GPUs to right-size an architecture through software, providing the exact amount of performance, in the exact configuration needed, then be redeployed on demand once a task is completed. Systems that enable IT administrators to deploy and share these disaggregated resources as required are sure to deliver holiday cheer.
4. Smart Networking
Traditional data center networking solutions are hardware-based and dumb. Emerging intelligent networking, by contrast, utilizes software to manage resources across the network with far greater efficiency. Intelligent networking significantly mitigates the problems associated with static data center architectures, especially when combined with emerging accelerator technologies. Yet even with the increased automation and efficiency that smart networking delivers through software, disaggregated resources still remain largely isolated and fixed due to the architecture’s larger configuration limitations.
5. Adaptive, Disaggregated, Composable Infrastructure
With all of these powerful hardware components being developed to support, augment, and advance AI+ML applications, the big gift on our customers’ wish lists this year is a better way to utilize these components together in the most efficient configuration possible, then release them for use by other applications. By dynamically managing previously siloed, disaggregated resources via software, companies that strengthen this base system are providing the bones on which the muscles of contemporary AI+ML computing can flex.
This is where composable disaggregated infrastructure solutions like those offered by Liqid come in. Composable infrastructure software enables accelerator resources to be shared beyond their fixed, physical limitations. GPU, FPGA, NVMe storage, storage class memory, and intelligent networking devices are all deployed across a PCI-Express (PCIe) fabric, communicating natively without protocol-translation latencies. This breakthrough enables IT users to create on-demand bare metal ‘servers’ through software, without the need to crack open a box.
Accelerator resources, once difficult to share, can be pooled and deployed in previously impossible configurations to create a perfectly balanced system for the workload at hand. Once the task is complete, the resources can be released for use by other applications. IT users can also share these resources across Ethernet, InfiniBand, and other commercially available fabrics, delivering the most comprehensive solution for data resource management in AI+ML deployments.
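To make the compose-and-release workflow concrete, here is a minimal toy model in Python. It is purely illustrative: the `ResourcePool` class, its device counts, and the `compose`/`release` methods are hypothetical and do not represent Liqid's actual API, but the sketch mirrors the lifecycle described above, in which devices are drawn from a shared pool into a bare-metal configuration and then returned for reuse.

```python
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    """Toy model of a shared pool of disaggregated devices (hypothetical, not Liqid's API)."""
    free: dict = field(default_factory=lambda: {"gpu": 8, "fpga": 4, "nvme": 16})
    servers: dict = field(default_factory=dict)

    def compose(self, name, **need):
        # Verify every requested device type is available before claiming any of them.
        if any(self.free.get(dev, 0) < n for dev, n in need.items()):
            raise RuntimeError(f"pool cannot satisfy request for {name!r}")
        for dev, n in need.items():
            self.free[dev] -= n
        self.servers[name] = dict(need)

    def release(self, name):
        # Return a composed server's devices to the shared pool for other workloads.
        for dev, n in self.servers.pop(name).items():
            self.free[dev] += n

pool = ResourcePool()
pool.compose("ingest-node", gpu=2, nvme=8)   # storage-heavy ingest phase
pool.release("ingest-node")                  # task done: devices go back to the pool
pool.compose("training-node", gpu=8)         # same pool now serves a GPU-heavy phase
```

The point of the sketch is the asymmetry it handles: the ingest-phase configuration and the training-phase configuration are very different, yet both are satisfied from one pool because nothing is permanently locked to a chassis.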
As artificial intelligence redefines the way we understand and address everything from street traffic to agriculture development to genetic exploration, tech companies are stepping up to deliver a great holiday for IT users. By deploying software-defined composable disaggregated infrastructure solutions, IT users can most efficiently and dynamically manage their hardware footprint via software. Composable disaggregated infrastructure solutions deliver a dynamic new foundation for worldwide research and development driven by artificial intelligence, facilitating important breakthroughs regardless of the industry vertical. That’s perhaps the best gift any of us could receive as we prepare to celebrate another cycle round the sun.
To learn more about how you can transform your data centers and take advantage of Liqid’s Composable Infrastructure, consult with your Climb rep or check out our page here for more information!
If you’d like to join our mailing list, add your info here and we will subscribe you to hear more about our vendor promotions, events and more!