
SaladCast Episode 13: Kyle Dodson on Containerized Workloads

Salad Technologies

*Welcome to SaladCast! In this podcast series, we introduce you to Salad Chefs from all corners of the Infinite Kitchen. We hope you’ll join us as we get to know members of our community, indie developers, and teammates from our very own Salad staff.*


In this episode: Engineering mastermind Kyle Dodson fields some heady questions about the history of containerization technology. If you’ve ever wondered how the Internet came to run on centralized services (or how Salad will change that), get ready for a crash course in all things cloud computing from one of the Kitchen’s most seasoned Chefs.

Watch the full episode at the SaladChefs YouTube channel.


Episode Highlights

Highlights content has been edited and slightly reordered for clarity.

Cloud computing has gone mainstream. How did we get here?

BOB: At Salad, we’ve always focused on understanding the human element of distributed computing—how to motivate people to share compute resources—in order to build our network in anticipation of a moment like this. Containerization and virtualization technologies seem to be spilling over from the arena of cloud development into consumer operating systems. How did we get here?

It’s become mainstream for even the largest enterprises to adopt various virtualization technologies. You could list many benefits, but one of the biggest drivers has been their potential to optimize resource utilization. Big companies always want to maximize their investments, especially what they spend on computing infrastructure.

Twenty years ago you’d go buy a computer, and then go back and forth with an enterprise server sales rep to plan out what you needed, resource-wise—not just for today, but for whatever lifetime you wanted out of that resource. “Our business is going to grow this way, we’re going to add more users, here’s how the software scales…” You had to understand all that up front, and then make a purchase, while guessing about the lifecycle five or six years from that moment. It was a black art. You got it wrong all the time, and it was really difficult to manage.

So the thinking changed. “What if we built large, optimized servers, and partitioned their resources for the applications running on them?”  Virtual machines allowed you to configure your systems so that, instead of having 20-40% of your resources sitting unused for the first few years of the hardware life cycle, you could put all of those resources to use for discrete purposes and maximize your long-term investments. That’s been a trend in software development for a long time.

What is the difference between virtual machines and containers?

Think of a virtual machine as a base operating system layer and all its associated overhead. Containers are really just another optimization step within that model. When you’re running several virtual machines on one really large server, you’re paying all this overhead for each virtual machine that’s actually running on the underlying hardware. There’s an opportunity there. As more enterprises adopted virtual machines, people began to ask, “Could we optimize this even further?” Containerization technology eliminated some of that redundant overhead and consolidated it down so you could apply more of those low-level hardware resources to applications running on your server.

What other problems did containerization technology solve?

Once again, these innovations arose from the desire to eke more returns out of your investment. Many containerization technologies were built on solutions that had been around for a while, or borrowed features from existing technologies. Docker didn’t start with a complete feature set from beginning to end. It was the product of concurrent trends in the information technology and software engineering worlds. As developers and network administrators looked for better ways to manage the compute ecosystem, those individual features became the coherent software packages we know today.

By the time containers arrived, you weren’t buying one computer and loading up ten applications on it. Most companies had scaled to a point where they were buying several racks of computers and distributing hundreds of applications across them. Containers provided a neat way to orchestrate and distribute those software packages across all your hardware.

How did containers and virtual machines lead to cloud computing?

Suppose you have a network, the abstract concept of a disk, and a collection of hardware resources at your disposal. If you want to execute a software package in that ecosystem, you first have to figure out how to distribute that workload across a cluster of machines. And to ensure reliability at scale, you need enough touchpoints so you can access those resources at all times.

If you have some spare cycles sitting on the side, it’s no big deal if one machine goes down. You just need to transport those resources over to the failed system. There might be some downtime, but you could set resources aside to make sure the system comes back up. That swung things back to the old problem of future-proofing your resource needs. You were still trying to make predictions before investing in hardware.

The cloud really emerged when large enterprises began to approach the same core requirements. They needed resources to run their baseline of live workloads, extra resources for backups, resources for emergency disaster scenarios, and resources to scale five years out. So along came these big cloud companies—notably Amazon, with AWS—who said, “We’re doing these things too, and we’ve got all this unused disaster capacity to share.” When you launch an elastic instance from AWS, everyone collectively pays for the uptime cost, effectively reducing your total investment.

You said that orchestration is the next step. Could you explain that?

BOB: To recap what we’ve discussed so far: containerization technology arose from a desire to maximize investments into network infrastructure. Developers adopted virtual machines to confront the fixed constraints of the hardware lifecycle. Now they can buy bigger pieces of hardware, run a few different services on them, and effectively share a redundant, lower-level OS layer through containers. You said that orchestration is the next step. Could you explain that?

To go back to a dated example: IT personnel used to sign into a console and set things up manually. If you needed three running machines, someone would actually dial in to make it happen. It wasn’t an ideal solution, so we decided to automate more of that process. That’s what we mean when we refer to “orchestration.”

A virtual machine is analogous to a container in that neither, on its own, provides much in terms of orchestrating how the workload is going to run. When you take something physical, with a lot of tangible constraints, and something abstract—like an application that needs to scale to “X” number of users—you need some way to connect those dots. You still need a higher-level power that manages those clustered resources.

That can mean different things. Do you need multiple copies of that software across so many boxes? Is that even possible for a particular software, given how it’s been developed? Even if it can be distributed, you’ve still got to determine the use case. Is this architecture principally for scaling, or are you adding redundant clusters to keep things up and running when something goes down? Large companies began to invest in workflow orchestrations to configure and drive their systems toward such predetermined use cases.

BOB: You’re referring to containerized images.

Right. To do that, you bake the code, its dependencies, and rules for how it should run into an executable image—which is sort of like a glorified ZIP file. Containerized images are immutable, meaning they should run the software exactly the same way, no matter where they end up—or at least provide a high likelihood of doing so. Software developers bake the code and its underlying operating system configuration into a file that can be deployed in any environment, whether it’s being shipped off to run on AWS servers or to a node on the Salad network.
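
To make that concrete, here’s a minimal sketch (not from the episode) of baking code, dependencies, and run rules into an image using the Docker SDK for Python; the base image, tag, and script are invented for illustration:

```python
# A hypothetical example: bake an OS layer, a dependency set, and a start
# command into an image, then run it. Requires Docker and the docker-py
# package (pip install docker).
import pathlib

import docker

workdir = pathlib.Path("demo-image")
workdir.mkdir(exist_ok=True)

# The Dockerfile pins the OS layer and the rule for how the code starts,
# so the image runs the same way on any Docker host.
(workdir / "Dockerfile").write_text(
    "FROM python:3.11-slim\n"
    "COPY app.py /app/app.py\n"
    'CMD ["python", "/app/app.py"]\n'
)
(workdir / "app.py").write_text('print("same result everywhere")\n')

client = docker.from_env()
client.images.build(path=str(workdir), tag="hello-immutable:1.0")

# The host supplies the hardware; the image supplies everything else.
output = client.containers.run("hello-immutable:1.0", remove=True)
print(output.decode())
```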

On our network, that image may have access to different GPUs and CPUs, depending on what hardware our users have purchased. There are also differences between a home ISP and Amazon’s fiber interconnects. But if you’re concerned with the performance of that software package, the image will execute the same way on Salad as it will on something like AWS.

What exactly does a product like Docker do to enable orchestration?

Docker emerged as one of the biggest players in the space just a few years ago. They weren’t the first, but they attained early commercial success, and made a name for themselves among open-source developers with an orchestration protocol called Swarm. It went beyond the basics of pulling and executing the image—the code, its dependencies, and rules for how you wanted it to run—and allowed you to distribute hardware resources across a collection of containers that ran as a “cluster” in a larger software set.
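
As a rough illustration of that idea, the sketch below asks a single-node Swarm for a replicated service through docker-py; the service name, image, and replica count are placeholders:

```python
# A hypothetical example of Swarm-style orchestration. Assumes Docker is
# running and this node is not already part of a swarm.
import docker

client = docker.from_env()
client.swarm.init()  # make this machine a single-node Swarm manager

# Declare three replicas of one image; Swarm decides where in the cluster
# they run and reschedules any replica that dies.
service = client.services.create(
    "nginx:1.25",
    name="demo-web",
    mode=docker.types.ServiceMode("replicated", replicas=3),
)
print(service.name, "scheduled")
```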

Swarm gained a lot of steam during its early development, but it’s been somewhat overshadowed by Kubernetes, a product that was originally created internally at Google to manage and scale their higher-level systems. When Google decided to open-source a version of that software package, Kubernetes really took off in the IT Ops community for its flexibility—not just in terms of how it could do things, but also for its extensibility features. You could more or less write your own code and plug it in. That’s why most shops turn to Kubernetes for their orchestration needs. It’s become the gold standard.

How do enterprises use containerized orchestration technologies?

With any orchestration system, you’ve got a whole collection of hardware resources to distribute. And as a business, you want to use as much as you can—because idle resources mean waste. But you also have to consider the meta, more abstract side of it. To run a software package across a system, you need to consider its requirements and the eventual scale you hope to achieve.

One of the terms people like to use in software development is “desired state.” The desired state is always different from reality: your software could be crashing, your hardware assets could go down—or whatever it may be. Orchestration layers take your requirements and automate whatever they can to approximate that desired state. Packages like Kubernetes offer out-of-the-box orchestration.

You can have a dozen instances spread across your servers, or even design a system that is geodistributed across data centers, depending on the specifics of the Kubernetes engine you’re running within the cluster. Orchestration also allows you to configure rules to handle self-healing in case critical parts of your system go down. That way you’re still up and running when a comet strikes one of your data centers.
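
Here’s a hedged sketch of declaring that kind of desired state with the official Kubernetes Python client; it assumes a reachable cluster and kubeconfig, and the app name, image, and probe are illustrative:

```python
# A hypothetical example: declare "a dozen instances, self-healing" and let
# the orchestrator chase that state. Requires the kubernetes package.
from kubernetes import client, config

config.load_kube_config()  # read cluster credentials from ~/.kube/config

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="demo"),
    spec=client.V1DeploymentSpec(
        replicas=12,  # the desired state: a dozen instances
        selector=client.V1LabelSelector(match_labels={"app": "demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",
                        # self-healing: restart any replica that stops
                        # answering this probe
                        liveness_probe=client.V1Probe(
                            http_get=client.V1HTTPGetAction(path="/", port=80),
                            period_seconds=10,
                        ),
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(
    namespace="default", body=deployment
)
```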

Are there any drawbacks to orchestration frameworks like Kubernetes?

Google developed Kubernetes for a very specific purpose. They were managing higher-scale systems running on their own data centers. It was built with the baked-in assumption that you’ll be running a clustered set of nodes in a trusted environment. As a software package, it isn’t really suited to applications where you have lots of nodes distributed all around the world.

Kubernetes was still in scope for Google’s geodistributed network, because they retained full control of their data centers. A company like Google can afford to lay their own fiber between data centers and establish a low-latency global distribution. It’s far too expensive for most other companies to go out and build on their own. So enterprises can choose to rent those types of resources from the likes of Google—or be forced to forgo them.

It’s an attractive idea to rent infrastructure that’s optimized with high-speed interconnects. But the quality of network connections is only part of it. Trust is another huge topic in cloud development. If you said to a large enterprise fifteen years ago, “Hey, move all your servers into Amazon’s building. They’ll take care of it for you,” they would have said, “Heck no! There’s no way you could convince somebody to do that.”

What exactly is “zero trust” or “trustless” computing?

The idea there is as straightforward as it sounds. If you want to run software, you can’t trust the environment in which it’s running by default. How do you know that the operator of those resources is acting with your best interest in mind?

You see elements of this in other software packages all the time—even something as simple as playing a game on Xbox Live. Many Xbox consoles come together on the network, but one host runs the server. There are times when that server can be hacked to cause malicious outcomes. That’s why the game developers deploy anti-cheat software packages. Collectively, those different Xbox consoles run an algorithm to detect whether there’s consensus between what each machine says has happened. If there isn’t, it could mean that someone is trying to screw you over with a spoof.
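
As a toy sketch of that consensus check (the node names and reports are invented), a simple majority vote over what each machine says has happened is enough to flag the dissenting host:

```python
# A toy consensus check with invented node names: flag any node whose report
# disagrees with the majority. Real anti-cheat systems are far more involved.
from collections import Counter

def detect_outliers(reports: dict[str, str]) -> list[str]:
    """Return the nodes whose reported state disagrees with the majority."""
    consensus, _ = Counter(reports.values()).most_common(1)[0]
    return [node for node, state in reports.items() if state != consensus]

reports = {
    "console-a": "player2 scored",
    "console-b": "player2 scored",
    "console-c": "player2 scored",
    "host": "player1 scored",  # a spoofed host telling a different story
}
print(detect_outliers(reports))  # ['host']
```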

That’s one solution in a zero trust computing environment. You can add extra layers, check the activity on your network, and verify the trustworthiness of any given node. But it doesn’t have to be a technical implementation through software. At Salad, we’re developing methods of orchestration to overcome trustless environments in other ways. We want to work with all of our Salad Chefs—the users who share their computers’ underutilized resources—to leverage resources that would represent waste in a large enterprise data center environment.

With all these trends coalescing, what do you see as the biggest opportunity?

Look at globalization. There’s nothing in the digital world that stays confined to country boundaries anymore. Things traverse the globe at the speed of light. We have computers all over the world that aren’t being utilized at 100% capacity. In a global context, that’s waste. It’s waste that we’ve put all over the place.

We’ve arrived at the same opportunity we saw in enterprise settings a few decades ago, where you had to plan your resource utilization for the future. To ensure that your infrastructure will continue to fit your needs in the future, you always have to over-buy, at least to some extent—and that over-purchasing leads to some amount of waste in the immediate term.

Salad is trying to tap into those resources. How can we put them to use to benefit the world, while reducing that waste? How do you solve the problem of establishing trust among those distributed nodes? There will always be actors who come with the intention to scam, cause problems, or be otherwise malicious. The orchestration layer we’re building will connect all of those underutilized nodes, network them, and run those software packages at the scale enterprises need—and simultaneously reward the person who owns those underutilized resources for eliminating that waste.

How might fully homomorphic encryption change our understanding of trustless computing?

BOB: IBM’s research team recently published some of their work with a major company in the healthcare space, where security and data privacy are paramount. They’ve put together a framework for supporting artificial intelligence workloads that uses fully homomorphic encryption (FHE) for applications on federated infrastructures. The idea is to protect the workload—not just the machine learning algorithm, but also the data set. How might that change our understanding of trustless computing?

Fully homomorphic encryption is an interesting topic. To put it in simpler terms, it’s the idea that you can run a software package without actually knowing what the algorithm itself is doing, or having any knowledge of the underlying data being processed. That’s the encryption aspect. The software is locked away in a state where whoever owns the hardware resource running the code can’t see what’s occurring.
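
To ground the idea without reaching for a real FHE library, here’s a toy sketch of Paillier encryption, a simpler, additively homomorphic cousin of FHE: the untrusted host combines two ciphertexts it cannot read, and the result decrypts to their sum. The key sizes are toy-scale and the scheme supports only addition, whereas FHE supports arbitrary computation:

```python
# A toy Paillier implementation (Python 3.9+), purely to illustrate computing
# on data the host cannot read. These key sizes are for demonstration only
# and are not remotely secure.
import math
import random

p, q = 999_983, 1_000_003            # small well-known primes, demo only
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def encrypt(m: int) -> int:
    r = random.randrange(1, n)       # assume gcd(r, n) == 1 (overwhelmingly likely)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 20, 22
ca, cb = encrypt(a), encrypt(b)
c_sum = (ca * cb) % n2               # the host computes on ciphertexts only
print(decrypt(c_sum))                # 42 == a + b, never visible to the host
```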

BOB: That reminds me of a comment you once made about highly secure computing environments. You said that if someone has access to your hardware, you should still assume they can read everything on there. Does FHE alter what we’ve traditionally understood about security and privacy?

When you ask an engineer, the answer is always, “It depends.” Suppose, theoretically, that you have full control over the software supply chain. You know everything that’s running on the machine. Let’s say you’ve even got a special setup at the firmware level, where the system can only boot into a signed build of Windows. Maybe that Windows OS can only run signed software from a particular vendor. You could even blacklist any software released by other vendors. A setup like that sounds pretty secure.

But the ultimate workaround is that all hardware is physical. You can find some really crazy hacking stories when you dig into these cybersecurity reports. As extreme as it sounds, somebody could splice wires onto your motherboard and read off the voltage lines. People have been able to read the vibrations of a fan across the room and extract data based on its RPMs. It’s a little far-fetched, but it’s possible.

BOB: I should have known better than to ask an engineer.

If you’re thinking in that mindset, physical access is the ultimate backdoor, no matter how good you may feel about the software configuration. Looking at something like FHE, it sounds a little crazy. Why does it all have to be encrypted? Why invest in something like that? It connects back to your question about trustless computing environments; you’re trying to control for that malicious bad actor.

The idea that you could deploy software to a computer, with safeguards in place to encrypt both the algorithm and the data, and do meaningful work whose results can only be decrypted by the software vendor? That’s a novel value proposition that could solve some of the core problems of trustless computing.

We can now distribute software to a machine we know nothing about, run it, utilize those resources, and get those results back. It’s a very interesting area of experimentation. As you noted, companies aren’t merely approaching this from a theoretical standpoint; they’re already defining workable solutions. Maybe there are software packages that go mainstream as a result.

In Windows 11, Microsoft has added extended support for containerization through features like WSL2. How will that change software development?

Microsoft is simultaneously focused on competing as a cloud service provider—with Azure and all its hardware and software offerings—and keeping Windows at the forefront as an industry standard. They’ve gone through a few iterations to incorporate containerization in Windows, but they ultimately observed the trends in data centers and made changes that should convince plenty of developers to use Microsoft products and services.

These kinds of features aren’t just beneficial in server land. Microsoft created a first-class virtualization layer called the host compute service (HCS) that not only streamlines access to hardware resources, but permits you to build on top of it quite easily. Windows 11 enables a seamless development experience, so you can ship straight from development to consumer environments.

You mentioned WSL2—that’s the “Windows Subsystem for Linux.” If you turn that on, you’re running a minimal version of Linux right alongside the Windows kernel. You can now go to the Microsoft Store and spin up an Ubuntu OS in a terminal within minutes. Now that you can pull in Linux-native applications and source code, you’ve got one less reason to keep a separate computer to develop for Linux, or to buy a software package for a virtual machine.

How does the HCS virtualization layer help us access underutilized resources?

If you’re working inside a system that’s built on the HCS layer, you’ll see a network card, but it’s not the actual underlying network card, or the disk, or the RAM. You’re able to interface with these lower-level resources through a translation layer. They’ve created a standard API that always looks the same from the inside, so that hardware vendors can translate that down to the real hardware. This is known as paravirtualization.
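
A toy sketch of that translation-layer idea, with invented class names (this is conceptual, not the actual HCS API):

```python
# A conceptual illustration of paravirtualization: the guest only ever sees
# one stable interface, and vendor-specific adapters translate its calls
# down to the real device. All names here are hypothetical.
from abc import ABC, abstractmethod

class VirtualNic(ABC):
    """The standard API a guest always sees, whatever the real hardware is."""
    @abstractmethod
    def send(self, frame: bytes) -> None: ...

class VendorANic(VirtualNic):
    def send(self, frame: bytes) -> None:
        # translate the generic call into this vendor's device commands
        print(f"vendor-A driver transmitting {len(frame)} bytes")

class VendorBNic(VirtualNic):
    def send(self, frame: bytes) -> None:
        print(f"vendor-B driver transmitting {len(frame)} bytes")

def guest_workload(nic: VirtualNic) -> None:
    # guest code is identical no matter which adapter sits underneath
    nic.send(b"hello")

guest_workload(VendorANic())
guest_workload(VendorBNic())
```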

Now that Windows 11 has launched with extended support for containerization, we can access even more of those underlying hardware resources via the HCS layer. Like we touched on earlier, a lot of developers package their code and OS configurations into images that can be executed from anywhere. Thanks to that common and consistent abstraction layer, you can unlock the features of all the computer’s lower-level devices to get better performance.

As more containerization technologies arrive on consumer hardware, how will Salad leverage them to service different workloads?

Not all hardware is created equal. While there are a lot of standard baselines, every piece of hardware comes with its own special features that are optimized for different use cases, and which only certain software knows how to use. When you go through a virtualization layer, you usually end up catering to the lowest common denominator—meaning you can only do the things that every part of the system can support.

But if you had software that better understands all those feature sets, and access to all kinds of different hardware, you could turn that into a real advantage. That’s why this is so exciting for us here at Salad. Many AI/ML frameworks need access to particular feature sets, or require specific performance from the GPU to solve a problem. You can’t just run those workloads through a virtualization layer and assume they’ll still work. We can actually distribute and run containerized workloads on individual target nodes, so there are all sorts of benefits there.

What does Windows 11 allow us to do that couldn’t have been done before?

You can now run CUDA, DirectML (a machine learning API developed by Microsoft), and various other application types inside of a Linux Docker container, running on a Linux kernel, as part of the HCS layer, all running inside your Windows box. The kernel receives that information from a driver—a software package—inside the container. If your software knows how to translate that information to the actual driver on Windows, it can make full use of the NVIDIA card there.
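
For what that looks like in practice, here’s a hedged sketch of requesting GPU passthrough from the Docker SDK for Python; it assumes Docker Desktop on the WSL2 backend with an NVIDIA driver that supports it, and the CUDA image tag is illustrative:

```python
# A hypothetical example: hand every available GPU through to a Linux
# container and list it with nvidia-smi. Requires docker-py and a GPU-capable
# Docker setup.
import docker

client = docker.from_env()
output = client.containers.run(
    "nvidia/cuda:12.2.0-base-ubuntu22.04",
    "nvidia-smi",                 # prints the GPU the container can see
    device_requests=[
        docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])
    ],
    remove=True,
)
print(output.decode())
```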

Until this point, there were many ways to approximate that behavior on the CPU side, but not as many ways to do it with GPUs. Now you can run other 3D-based applications inside a container just as if they were running natively on Windows. We’re in a perfect position to tackle cutting-edge workloads.

How do you reconcile optimized performance with data security?

Containerization offers a few obvious benefits. You’ve got the portability to move software from one machine to another using images. Then you’ve got the cross-platform compatibility to do things like running Linux on Windows at full hardware acceleration. There’s also added security via isolation. On these virtualization layers, you don’t have unfettered access to the computer. Windows enforces its own guardrails to protect the end-user and the contents of their machine.

Those safety measures increase security, but they do so at the expense of performance. Every time you add another software layer, there’s a slight overhead cost. A cryptocurrency miner running in a container could see a 20% reduction in hashrate compared to the throughput it would get running directly on the host.

The first version of these kinds of innovations is always rough around the edges, but there’s a bright future ahead. The nice thing is, there’s no end to the performance optimizations you can find. If those other benefits of portability, compatibility, and security are important enough to capitalize on, you could opt into running one of Salad’s containerized workloads. One day soon, our users will choose whether to run packages on the host for max returns, or in a container for added security.

Of all the things we’ve got in development, what are you most excited about?

With all of these trends coming together, it’s exciting just to see the trajectory that we’ve laid out. We’re focused on gathering supply to build our infrastructure. I can’t wait to see who comes along with a neat application that can take advantage of what Salad and all our Chefs provide.

As we have more conversations with potential partners, it’s clear that there will be a demand for Salad’s network, whether it’s through what we’re building today or something years down the road.

I’ll be the first to say it: even we don’t know all the use cases! We’ve already encountered partners with use cases I never knew existed, and which I could never have dreamed up. They see our decentralized network as a valuable alternative to centralized providers. Blockchain is just scratching the surface of everything Web 3.0 might bring.

More SaladCast Coming Soon

Liked this episode? Stay tuned for a continuing series of interviews featuring Bob and other faces from Kitchen HQ in the weeks ahead. These upcoming episodes promise an open-source look at the Salad recipe. In the meantime, browse our full SaladCast episode catalog.


