It’s easy to see why over half of all cloud customers use containers. Containerization makes packaged software more portable and adds a vital layer of immutability that streamlines the path from testing to production.
The broad adoption of container technologies virtually eliminated the switching costs typically associated with migrating to new cloud infrastructure. This has given many enterprises the chance to reevaluate their cloud spending by adopting hybrid or multicloud configurations in search of more cost-effective implementations.
But by sticking with conventional data center solutions, we’ve missed the opportunity to realize a potentially greater cost optimization that containers make possible: decentralizing the cloud itself.
On Price Transparency
In the years since containers arrived on the scene, multicloud has seen near-universal adoption among enterprises. It’s the inevitable response to costly, mass-market cloud solutions and the resulting crisis of overprovisioning—but it’s only half the answer.
Multicloud setups that rely on traditional data centers often expose customers to new, unforeseen pressures on their unit economics. Despite the potential for net savings in the short term, many cloud-based organizations still overpay by as much as 65% for industry-standard services such as Kubernetes orchestration.
The reality is that no conventional cloud service can overcome the baked-in overhead of all the massive physical infrastructure behind it. Customers in the traditional public cloud quickly become beholden to fluctuations in price that are dictated by the ever-rising cost of maintaining and constructing multimillion-dollar data centers.
As multicloud has become the norm, the Big Three cloud providers have all made half-hearted attempts at price transparency (or offered cost-optimization monitoring features for additional fees), but the operating cost of conventional data centers ultimately precludes them from providing the most affordable solutions.
End-of-the-month bills from hyperscale providers have even made some enterprises misty-eyed for the days of running on-premises data centers. Every year, more cloud customers elect to purchase their own hardware and run bare metal in a hybrid configuration just to save on cloud spending.
If the public cloud could offer more affordable services, multicloud might never have been necessary. Now it rules the day. Companies are desperate to reduce costs so they can operate more efficiently—and if enterprises are feeling pain in their pocketbooks, it goes without saying that startups are absolutely struggling.
The price models of traditional cloud ecosystems inadvertently bar innovators from competing by placing an implicit constraint on what they can do. Leading hyperscale providers have consistently failed to produce affordable services for smaller companies running discrete applications or specialized workloads. But experimentation is the heart of computer science! Cloud customers shouldn’t have to take use cases off the table because of costly compute cycles, and researchers shouldn’t have to wait until next quarter to follow up on promising results.
Unless we experiment with alternative cloud infrastructure, tomorrow’s big thinkers may be stuck choosing between costly cloud and a forced return to server management.
Leveraging Latent Compute
Today’s cloud relies on expensive data centers, but that arrangement will prove untenable in the coming years, as artificial intelligence applications and billions of IoT devices induce data demand that conventional cloud providers simply won’t be able to satisfy.
Meanwhile, the world’s largest reserve of latent compute resources—and the hardware population that consistently boasts the most up-to-date processors, hardware drivers, and operating systems—lives in consumer homes around the globe.
The beauty of containers is that they can be run in nearly any compute environment and execute as intended. We have barely scratched the surface of technological possibility.
Volunteer compute-sharing networks such as the BOINC platform and SETI@home have shown us just what decentralized supercomputing clusters can accomplish at scale. Modern container technology now allows us to create a standardized virtual compute environment across worldwide networks, unencumbered by expensive hardware upgrades and physical maintenance.
I believe we can replace the cloud with a global edge network capable of accomplishing more.
Why We Built Salad
At Salad, we’ve built a distributed cloud platform to nourish innovation. By leveraging underutilized compute resources from consumer-grade hardware, Salad invites everyday people to compete directly with traditional providers, earn meaningful rewards, and scale our decentralized infrastructure layer to infinity.
It’s our goal to facilitate development with seamless, “to-go” container orchestration that comes with no surprises on the bill. Salad Container Engine is a lightweight, fully managed orchestration service that has been purpose-built to deploy stateless containers as affordably and as simply as possible.
With minimal configuration, our proprietary orchestration engine instantiates container groups, parses the declared technical requirements, and adaptively distributes workloads to optimal hardware cohorts drawn from thousands of nodes on our global edge network. SaladCloud infrastructure is the perfect pairing for long-running data processing, rendering queues, and all sorts of HPC batch jobs.
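To make the idea concrete, here is a minimal sketch of requirement-based scheduling: matching a container group's declared resource needs to the cheapest eligible nodes in a heterogeneous pool. The `Node`, `ContainerGroup`, and `schedule` names are illustrative assumptions, not Salad Container Engine's actual API, and real schedulers weigh far more signals (reliability, locality, current load) than this toy cost-only policy.

```python
from dataclasses import dataclass

@dataclass
class Node:
    """A consumer machine offering spare capacity to the network (hypothetical model)."""
    node_id: str
    cpu_cores: int
    ram_gb: int
    has_gpu: bool
    price_per_hour: float  # illustrative spot price in USD

@dataclass
class ContainerGroup:
    """Resource requirements declared for a group of container replicas."""
    cpu_cores: int
    ram_gb: int
    needs_gpu: bool
    replicas: int

def schedule(group: ContainerGroup, nodes: list) -> list:
    """Return the cheapest cohort of eligible nodes, one node per replica."""
    # Filter to nodes that satisfy every declared requirement.
    eligible = [
        n for n in nodes
        if n.cpu_cores >= group.cpu_cores
        and n.ram_gb >= group.ram_gb
        and (n.has_gpu or not group.needs_gpu)
    ]
    # Greedy cost-first policy: cheapest eligible nodes win.
    eligible.sort(key=lambda n: n.price_per_hour)
    return eligible[: group.replicas]

pool = [
    Node("a", cpu_cores=8, ram_gb=16, has_gpu=True, price_per_hour=0.12),
    Node("b", cpu_cores=4, ram_gb=8, has_gpu=False, price_per_hour=0.04),
    Node("c", cpu_cores=16, ram_gb=32, has_gpu=True, price_per_hour=0.09),
]
group = ContainerGroup(cpu_cores=4, ram_gb=8, needs_gpu=True, replicas=2)
cohort = schedule(group, pool)  # node "b" is excluded (no GPU); "c" and "a" win on price
```

The greedy cheapest-first policy is the simplest way to show why a large, diverse pool of consumer hardware can undercut fixed data-center pricing: the scheduler is free to chase the lowest-cost capacity that still meets the workload's stated requirements.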
We believe that a decentralized cloud is the ideal solution for Web3 startups, AI researchers, and other cash-strapped big thinkers. That’s why we’ve made it our mission to feed a hungry Internet.
The Salad team is hiring network and backend engineers. If you are interested in developing container-orchestration solutions on a decentralized infrastructure layer, please visit our careers page to browse available positions.