We’re in the midst of our own gold rush of sorts. Rather than heading to California in search of prosperity through rare and precious assets, businesses are looking skyward for their fortunes. The cloud, as it’s become known, has empowered organizations with vast amounts of invaluable data to better serve the needs of customers while improving the all-important bottom line.
Every second of every day, billions of people are seeking information through computers, mobile devices and the Internet of Things (IoT). This translates to massive amounts of unstructured data being collected and stored with the intent to leverage these variables to support decisions and actions that best satisfy a need or demand. We have come to know this as the big data boom.
Retrieving data involves an approach similar to the one miners once used for discovery and exploration. The process involves mining, processing, refining and extracting unfathomable amounts of information in hopes of uncovering valuable insight into customer behaviors, trends and every facet of business performance. As with the California Gold Rush, organizations across industries – from manufacturing to finance to healthcare – are in search of these golden nuggets of information to gain a significant advantage over the competition.
Demand for meaningful data
Our reliance on the internet has never been greater, and access has never been easier. According to a 2019 Pew Research study, 81 percent of Americans go online daily, with half using the internet multiple times throughout the day. The barrage of IoT devices presents both extraordinary opportunities and difficult challenges.
As of 2017, an estimated 2.5 quintillion bytes of data were being generated every day, and that number will only accelerate. By 2025, the International Data Corporation (IDC) projects that IoT devices will generate more than 79 zettabytes (ZB) of data. That’s a whole lot of information to sift through, and therein lies the challenge for businesses. A majority of this data is unstructured, meaning it can come in any size, shape or form, making it extremely difficult to manage and analyze. So, as the internet rapidly grows, how can organizations effectively manage data to gain valuable insights?
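To put those figures in perspective, here is a quick back-of-the-envelope calculation (a sketch using only the numbers cited above; the comparison itself is not from the article):

```python
# Illustrative arithmetic using the figures cited in the article:
# ~2.5 quintillion (2.5e18) bytes generated per day as of 2017, and
# IDC's projection of more than 79 zettabytes from IoT by 2025.

DAILY_BYTES_2017 = 2.5e18   # 2.5 quintillion bytes per day
ZETTABYTE = 1e21            # 1 ZB = 10^21 bytes

annual_zb_2017 = DAILY_BYTES_2017 * 365 / ZETTABYTE   # ~0.91 ZB per year
idc_projection_zb_2025 = 79.0

growth_factor = idc_projection_zb_2025 / annual_zb_2017
print(f"2017 rate: ~{annual_zb_2017:.2f} ZB/year")
print(f"IDC 2025 projection: {idc_projection_zb_2025} ZB, "
      f"roughly {growth_factor:.0f}x the 2017 annual rate")
```

In other words, even the 2017 rate amounts to under 1 ZB per year, so the 79 ZB projection implies growth of nearly two orders of magnitude.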
Cloud computing bridge
It’s true you can have too much of a good thing. Consumers and businesses are generating so much data that a formidable strain is being placed on the internet’s infrastructure. As cloud technologies evolve, how can companies alleviate that pressure while still effectively collecting and managing the data that yields valuable insights? That’s where cloud computing plays a role in solving the data crunch.
Cloud computing, better known as “the cloud,” serves as a virtual bridge that seamlessly transports data from IoT applications to its destination. Variable expenses, tailor-built capacity, rapid deployment of applications, and exceptional speed and agility are key reasons organizations favor this approach to collecting, processing and storing tremendous amounts of data. The destination that houses all of this data in the public cloud is what we refer to as a data center.
Data centers prove critical to the success of IoT
Data centers are the keystone of the big data boom. These technology havens are vital to organizations as they store, communicate and transport the information we generate each and every day. However, many data centers today run on outdated infrastructure with inefficient systems, having failed to adapt to emerging technologies. As a result, their capacity to produce meaningful information is limited.
As data centers transition to satisfy the growing demand, the processes within their ecosystems will need to be modernized. Racks of servers and computing equipment draw enormous amounts of power and generate extreme heat. That’s where innovative cooling systems play a role in ensuring these supercomputers continue to operate without overheating.
Unfortunately, traditional data center cooling systems waste a great deal of energy and money, and often prove insufficient. One of the most common methods is air-based cooling, specifically the cold aisle/hot aisle configuration. Simply put, it conserves energy and lowers cooling costs by managing airflow, separating cold air from hot air.
In a cold aisle/hot aisle layout, server racks are lined up in alternating rows with cold air intakes facing one way and hot air exhausts facing the other. In theory, this creates a convection system in which the racks cool themselves, but it does not always work, so more cold air must be pumped into the room. While that brings the room to an ideal temperature, the excess cooling capacity breeds hot spots: when all of the cooling units run simultaneously, the surplus of airflow pushes cool air up and away from the servers instead of past and around them, allowing hot spots to develop and form in racks and across equipment.
And this brings us to liquid-based cooling. Viewed as the successor to air-based cooling and a far more efficient solution, this technique uses water to cool specific system components to a greater degree, applied to the hot side of the server rack to bring temperatures down. Because water conducts electricity, the liquid never touches the components themselves. Water held in basins is carried through hoses to cooling tower pumps, then runs alongside the server, keeping the components inside the rack cool.
There is, however, a drawback to this approach. There will always be a risk of water leaking, especially at the connection points between components, which spells disaster for server racks. Connections are therefore critical to the continuous operation of liquid-based cooling. Companies demand a connection that offers maximum reliability and efficiency along with durability and a compact design and, most importantly, eliminates the chance of a drip, spill or leak. That’s where Parker’s liquid cooling coupling solutions satisfy the need.
Featuring an advanced internal design and robust functionality, these couplings incorporate non-spill valving, meaning no spillage during connection or disconnection. The key is a flat-sealing valve design that prevents fluid loss around sensitive electronics and electrical connections, ensuring the highest level of compatibility with a broad range of liquids and application environments.
Parker’s liquid cooling coupling advantages
Higher flow rates
Low pressure drop for maximum energy efficiency
Resistance to vibrations and rotation
No leakage when connected and disconnected due to state-of-the-art internal design
To learn more, visit Parker at SC19, November 17-22, 2019 in Denver, Colorado. View demos of the liquid cooling connection solutions at Booth #557.
Article contributed by Cameron Koller, market development manager, Quick Coupling Division, Parker Hannifin.