Before GigaIO came along, the five basic building blocks of massive computer servers could not ‘speak’ to other building blocks in other servers in the same rack or computing cluster. Now they can. Thanks to the innovative work – and technological brilliance – of Alan Benjamin and Joey Maitra.
By Lee Barnathan, California Business Journal
Alan Benjamin couldn’t believe it, either. How was it possible that servers and their basic building blocks couldn’t easily talk to each other?
They couldn’t, that is, until now.
Thanks to technology developed by Benjamin’s company, GigaIO, the basic building blocks of servers in a rack or cluster can now communicate cleanly, clearly and efficiently, saving companies valuable time — and an enormous amount of money.
Welcome to the future of the new data center.
The technology, created by GigaIO founder and chief architect Joey Maitra, is in testing phase now, and Benjamin expects it to be ready for shipping to customers early next year.
“When I got involved and understood Joey’s technical approach, I told myself the same thing: ‘This makes sense. This is how it should have been done,’” Benjamin says. “It turns out that while it looks simple, making it all work is a lot harder.”
And ultimately, that harder work is exactly what GigaIO accomplished.
The reality is that companies have tried and failed because no one could figure out how to make the switch that routes all the information in a computer network connect to all the server parts and let them talk to each other in their standard language, PCI Express.
Peripheral Component Interconnect (PCI) Express has been around since 2004. When a computer boots up, the PCI Express enumeration process determines which devices are plugged into the motherboard. It then identifies the links between those devices, creating a map of where traffic will go and negotiating the width of each link.
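For technically minded readers, that boot-time process can be sketched as a toy model: discover what is plugged in, then train each link to the widest width both ends support. The slot names, device names and lane counts below are illustrative, not real hardware; actual enumeration walks PCI configuration space.

```python
# Toy sketch of PCIe-style enumeration. All names and lane counts are
# hypothetical; real enumeration reads PCI configuration registers.

SLOT_LANES = {"x16_slot": 16, "x8_slot": 8, "x4_slot": 4}

def negotiate_width(slot_lanes: int, device_lanes: int) -> int:
    """A link trains to the widest width both ends support."""
    return min(slot_lanes, device_lanes)

def enumerate_devices(plugged: dict) -> dict:
    """Build a map of device -> negotiated link width, like the
    traffic map created at boot."""
    link_map = {}
    for slot, (name, dev_lanes) in plugged.items():
        link_map[name] = negotiate_width(SLOT_LANES[slot], dev_lanes)
    return link_map

if __name__ == "__main__":
    plugged = {
        "x16_slot": ("gpu", 16),
        "x8_slot": ("nvme_ssd", 4),   # x4 card in an x8 slot trains at x4
        "x4_slot": ("nic", 8),        # x8 card in an x4 slot trains at x4
    }
    print(enumerate_devices(plugged))
```

The key design point the sketch captures is that the negotiated width is always the minimum of the two ends, which is why a fast card in a narrow slot runs at the slot’s speed.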
When a company has racks of servers – there are usually between six and 25 servers per rack – the information traveling between servers can only move so fast because of PCI Express’s chief shortcoming: it has not yet evolved to communicate with multiple hosts. Instead, communication must be translated from PCIe into some other networking protocol.
Maitra’s solution was to create a PCIe switch that enables multiple root complexes to exist on a single network. A root complex connects all the building blocks inside a server – processor, memory, storage, acceleration and networking – and generates transaction requests on behalf of the processor.
In the past, these root complexes “fought” over which links traffic would flow across. Now each server’s root complex “owns” its own link and sees all the other servers’ links in the cluster without fighting.
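The contrast the article draws can be illustrated with a small model: each server’s root complex is assigned a dedicated link, yet it can still see every peer in the cluster. This is a conceptual sketch only; the class and method names are hypothetical and do not represent GigaIO’s actual software.

```python
# Toy model of per-server link ownership on a shared fabric.
# Names are illustrative, not GigaIO's API.

class Fabric:
    def __init__(self):
        self.links = {}  # server name -> dedicated link id

    def attach(self, server: str) -> int:
        """Give each server's root complex its own link (no contention)."""
        link_id = len(self.links)
        self.links[server] = link_id
        return link_id

    def visible_peers(self, server: str) -> list:
        """A server sees every other server's link without owning it."""
        return sorted(s for s in self.links if s != server)

fabric = Fabric()
for s in ("server1", "server2", "server3"):
    fabric.attach(s)

print(fabric.links["server2"])           # server2's own dedicated link
print(fabric.visible_peers("server1"))   # the peers server1 can reach
```

The point of the model: ownership and visibility are separated, so no two root complexes ever compete for the same link, which is what eliminates the “fighting.”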
“What we discovered is extreme connectivity with lightning-fast, seamless, impeccable performance,” Benjamin says.
Maitra is a certified genius in his space. Having held executive positions at Magma, Patriot Scientific and Metacomp, he was instrumental in the development of a Unified System Area Network with PCI Express as the fabric and is the inventor of the IP associated with it.
“He defined the software, hardware and the system architecture of the prototype switch implementation and was responsible for the design implementation,” Benjamin says.
“Remarkable stuff,” he adds.
Companies and industries with massive computer systems that run millions of programs and jobs at once will benefit dramatically from GigaIO’s technology. These include artificial intelligence, high-speed trading, genome analysis, bioscience research, movie rendering, oil and gas exploration — and moving data from a centralized cloud to the logical extremes — or edges — of a network.
“When you communicate from server number one to server number two, the native language of the chips in the server is PCI Express, and the architecture inside those servers is PCI Express,” Benjamin says. “We have invented technology that allows us to take PCI Express and move it within the network cluster.”
Benjamin, who was previously COO of Pulse Electronics and CEO of Excelsus Technologies, outlines several practical benefits of GigaIO’s unique technology:
- Vastly improved performance. If it takes data 20,000 nanoseconds (a nanosecond is one billionth of a second) to move through a conventional system, GigaIO’s technology moves it in less than 200 nanoseconds — 100 times faster. And faster is better, of course, especially when something like AI might require nine million calculations, Benjamin says. An example would be a Big Pharma company conducting research and development on a new drug to treat an illness or disease. Since FDA approval is time-consuming and expensive, the company would want to ensure the effort is worthwhile. But the seven terabytes (or seven trillion bytes) of data it would collect weekly would be too much for humans to sift through before the next week’s seven terabytes come in. “Our technology will be able to keep up with it,” Benjamin says. “That’s what makes it so groundbreaking and unique.”
- Better – and more – flexibility. A server contains five basic building blocks: a central processing unit, memory, storage, acceleration and a communication card. These elements can either be “locked together” or treated as separate subsystems through a process called disaggregation. GigaIO’s technology allows an advanced-scale computing system to dramatically increase its resource utilization by taking better advantage of disaggregation.
- Reduced expenses and increased productivity. Benjamin says one early customer built a new system using GigaIO technology for about $60,000 that delivers 25 percent better performance than its existing system, which cost about $140,000.
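The latency and data-volume figures quoted above are easy to sanity-check with a few lines of arithmetic (the 20,000 ns and 200 ns values are the article’s, not a measurement):

```python
# Sanity-check of the figures quoted above. A nanosecond is 1e-9 seconds.
legacy_ns = 20_000       # latency quoted for a conventional network path
gigaio_ns = 200          # latency quoted for GigaIO's fabric
speedup = legacy_ns / gigaio_ns
print(speedup)           # matches the "100 times faster" claim

# Seven terabytes arriving per week works out to a modest sustained rate:
bytes_per_week = 7e12
seconds_per_week = 7 * 24 * 3600
rate_mb_s = bytes_per_week / seconds_per_week / 1e6
print(f"{rate_mb_s:.1f} MB/s sustained")
```

The interesting implication is that the weekly volume itself is not the bottleneck — a sustained rate of roughly a dozen megabytes per second is trivial to ingest. The challenge the article describes is analyzing that backlog fast enough, which is where the 100x latency improvement between servers matters.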
Benjamin envisions more companies and industries adopting GigaIO’s technology as their resistance to it — resistance that stems from the fear of something new — fades.
“The pace of change the next 10 years will be dramatic,” he says. “No one wants to spend big money because of fear of obsolescence. We’re trying to alleviate it.”
And that Big Pharma company that would use the technology for seven terabytes of data? Benjamin says that by next year, the technology will be able to handle 15 terabytes.
“We’re giving them a platform that’s simply more adjustable and more flexible,” he says.
Copyright © 2018 California Business Journal. All Rights Reserved.