Amazon and Google Announce Resilient 'Multicloud' Networking Service Plus an Open API for Interoperability (reuters.com) 21
Their announcement calls it "more than a multicloud solution," saying it's "a step toward a more open cloud environment. The API specifications developed for this product are open for other providers and partners to adopt, as we aim to simplify global connectivity for everyone."
Amazon and Google are introducing "a jointly developed multicloud networking service," reports Reuters. "The initiative will enable customers to establish private, high-speed links between the two companies' computing platforms in minutes instead of weeks." The new service arrives a little over a month after the October 20 Amazon Web Services outage disrupted thousands of websites worldwide, knocking offline some of the internet's most popular apps, including Snapchat and Reddit. That outage is estimated to have cost U.S. companies between $500 million and $650 million, according to analytics firm Parametrix.
Google and Amazon are promising "high resiliency" through "quad-redundancy across physically redundant interconnect facilities and routers," with both companies continuously watching for issues, and with MACsec encryption between the Google Cloud and AWS edge routers, according to Sunday's announcement: As organizations increasingly adopt multicloud architectures, the need for interoperability between cloud service providers has never been greater. Historically, however, connecting these environments has been a challenge, forcing customers to take a complex "do-it-yourself" approach to managing global multi-layered networks at scale... Previously, to connect cloud service providers, customers had to manually set up complex networking components, including physical connections and equipment; this approach required lengthy lead times and coordination with multiple internal and external teams, and could take weeks or even months. AWS had a vision for developing this capability as a unified specification that could be adopted by any cloud service provider, and collaborated with Google Cloud to bring it to market.
Now, this new solution reimagines multicloud connectivity by moving away from physical infrastructure management toward a managed, cloud-native experience.
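The excerpt doesn't include the API itself, but the promised shift is from physical provisioning to a declarative request. Here is a minimal sketch of what such a provisioning payload might look like, in Python; the endpoint is omitted, and every class, field name, and region below is a hypothetical illustration, not the actual open API specification:

import json
from dataclasses import dataclass, asdict

# Hypothetical request body for a cross-cloud interconnect.
# All field names are illustrative; the real open API spec may differ.
@dataclass
class CrossCloudLinkRequest:
    name: str
    aws_region: str
    gcp_region: str
    bandwidth_mbps: int
    macsec_enabled: bool = True  # the announcement says MACsec is used between edge routers

def build_request(name: str, aws_region: str, gcp_region: str, bandwidth_mbps: int) -> str:
    """Serialize a link request as JSON, ready to POST to a provisioning endpoint."""
    req = CrossCloudLinkRequest(name, aws_region, gcp_region, bandwidth_mbps)
    return json.dumps(asdict(req), indent=2)

if __name__ == "__main__":
    # "Minutes instead of weeks": one declarative request in place of
    # cross-connect orders, LOAs, and router-configuration tickets.
    print(build_request("prod-replication-link", "us-east-1", "us-central1", 10_000))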
Reuters points out that Salesforce "is among the early users of the new approach," Google Cloud said in a statement.
So now ... (Score:4, Informative)
Re: (Score:2)
Were you able to before? Obviously, "like everyone else" you'll be keeping it encrypted at rest with keys that are kept in an HSM. For the important "pet feeding habit data" you will have made an exception and actually bought your own HSMs, kept in your multiple highly geographically separated underground bunkers with limited on-site compute, feeding only limited summary results back to the cloud. For less important "nuclear weapons test results" data you find some compromise where you can track which and
Even better... (Score:2)
Now The Great Ooops will delete all data at *two* cloud platforms at once.
If you thought multi-region was hard... (Score:3)
then you're going to hate multi-cloud.
Re:If you thought multi-region was hard... (Score:4, Insightful)
Yes indeed, and you're going to have to think about egress fees, as well as inter-region fees. There's no way you're getting out of this without making both providers rich.
That said, I see this as a good thing. You could previously do something similar with VPNs, but you'd have some risks which this takes away. It's a possible way to move workloads from one cloud to the other, whilst still keeping database replication or whatever between them - that gives you a warm standby in the other cloud, should yours be having a bad day (which both have had recently). For some shops it may be a way to fully migrate from one cloud to another, but I doubt many will do it this way.
Yes this is a royal pain in the wallet, but it feels like a gentle loosening of the shackles around cloud vendors. If they ever discount (or remove) egress over this link, then you really have something useful. Likewise if other cloud providers can offer similar services so it's not just about these two. It might sound unthinkable today, but then so was a cloud-to-cloud link just a few years ago.
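To put rough numbers on the egress point above, here is a back-of-the-envelope Python sketch; the per-GB rate and replication volume are illustrative placeholders, not quoted prices, and actual pricing varies by provider, region, and tier:

# Back-of-the-envelope egress cost for cross-cloud database replication.
# Both constants below are assumptions for illustration, not real prices.
EGRESS_PER_GB = 0.09          # assumed egress rate, $/GB
REPLICATION_GB_PER_DAY = 500  # assumed daily replication volume

monthly_gb = REPLICATION_GB_PER_DAY * 30
monthly_cost = monthly_gb * EGRESS_PER_GB
print(f"{monthly_gb} GB/month at ${EGRESS_PER_GB}/GB = ${monthly_cost:,.0f}/month, per direction")
# 15000 GB/month at $0.09/GB = $1,350/month, per direction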
Re: (Score:3)
Likewise if other cloud providers can offer similar services so it's not just about these two. It might sound unthinkable today, but then so was a cloud-to-cloud link just a few years ago.
Connectivity with Azure is announced for 2026.
Re: (Score:2)
Development hell (Score:2)
What about clustering synchronization costs (Score:2)
As everyone is quick to point out, uploading data to the cloud costs nothing. However, downloading it back out can cost a lot. I have sometimes been involved in setting up communications between multiple cloud providers, and a key factor has always been setting up some sort of bandwidth limits to make sure no process attempts to download the entire storage bucket from another cloud service.
So how is this going to be billed, if it's now officially supported? Especially if I have a working service o
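A minimal sketch of the kind of safeguard the parent describes: a token-bucket throttle wrapped around bucket downloads, so no single process can saturate the cross-cloud link. The chunk-iterator interface is hypothetical; plug in whatever your object-store client yields.

import time

class TokenBucket:
    """Simple token bucket: permits rate_bytes per second, bursting up to capacity."""
    def __init__(self, rate_bytes: float, capacity: float):
        self.rate = rate_bytes
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def consume(self, n: int) -> None:
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= n:
                self.tokens -= n
                return
            time.sleep((n - self.tokens) / self.rate)  # wait until enough tokens accrue

def throttled_copy(chunks, bucket: TokenBucket, sink) -> None:
    """Copy an iterable of byte chunks (e.g. from an object-store client) through the throttle."""
    for chunk in chunks:
        bucket.consume(len(chunk))
        sink(chunk)

# Cap cross-cloud pulls at ~50 MB/s with a 100 MB burst allowance.
bucket = TokenBucket(rate_bytes=50 * 2**20, capacity=100 * 2**20)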
A small amount of "multi" (Score:2)
"Two clouds" is not "multicloud". It's "bicloud", at most. And if it's rollin' just fine, "bicycloud".
Scotty Said It Best (Score:2)
"The more they overthink the plumbing, the easier it is to stop up the drain". (engineer Montgomery Scott, aka 'Scotty')
It's Going to Cost More (Score:2)
Of course it is. Every problem (including our ineptitude) is an opportunity.
A highly unstable multi highwire act :o (Score:3)
Yeah, great (Score:2)
Make the service unreliable so you buy two cloud services instead of one
Single point of failure (Score:2)
By connecting everything, they are definitely working towards creating a single point of failure of the entire Internet.
It's the end times... (Score:2)
The Googlzon has been birthed ...
Great (Score:2)
Least common denominator (Score:2)
Oh, you want portable cloud? You can use only these 10 features. You want more features? Oh, you've gotta use the native Google or AWS clouds for that.
Abstraction layers are the opposite of resilient (Score:2)
You want to experience fragile software? Try building it using an abstraction layer like Hibernate. These can be great tools, but when you use them, you have to stay firmly on the beaten path, using only the features that are fully supported by all underlying platforms. The problem is, nobody knows what all the obscure differences are between the underlying platforms. A SQL Server developer might not suspect that MySQL can't do transactions unless the InnoDB engine is being used. But Hibernate might not seam
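A concrete, minimal instance of the leak the parent describes, using Python's DB-API placeholder styles as a stand-in for Hibernate dialect differences: the same query text isn't portable across drivers, even though they all implement the "same" interface.

import sqlite3

# sqlite3 uses the 'qmark' paramstyle: placeholders are '?'.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pets (name TEXT, meals_per_day INTEGER)")
conn.execute("INSERT INTO pets VALUES (?, ?)", ("rex", 2))  # works here

# psycopg2 and mysqlclient use the 'format' paramstyle: placeholders are '%s'.
# The identical call against those drivers raises a syntax error, so any
# abstraction spanning both must translate placeholders -- along with every
# other dialect quirk it knows about. The quirks it doesn't know about leak through.
print(conn.execute("SELECT name FROM pets WHERE meals_per_day = ?", (2,)).fetchone())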