
Is the Cloud Making Internet Services More Fragile? (nbcnews.com) 119

In the past three weeks, two major outages at Amazon's cloud computing service "led to widespread disruptions at other online services," reports NBC News. The report also cites June's "service configuration" issue at cloud CDN Fastly, which took countless sites offline, including PayPal, Reddit and GitHub, and an AWS outage in November 2020 that affected clients like Apple.

"The drumbeat of issues underscores that the internet, despite all it's capable of, is sometimes fragile...." The latest disruption occurred Wednesday, when customers of DoorDash, Hulu and other websites complained that they couldn't connect. The problems were traced to Amazon Web Services, or AWS, the most widely used cloud services company, which reported that outages in two of its 26 geographic regions were affecting services nationwide. A similar disruption took place Dec. 7, crippling video streams, halting internet-connected robot vacuum cleaners and even shutting down pet food dispensers in a series of reminders of how much life has moved online, especially during the coronavirus pandemic. AWS published an unusually detailed description of what went wrong, along with an apology.

The incidents helped to explode the illusion, reinforced by decades of steadily improving internet speed and reliability, that everyday consumers can rely on online services to be available without fail.... Experts in computer science and security said the interruptions don't really call into question the fundamental design of the internet, one of the founding ideas of which was that a distributed system can mostly continue functioning even if one piece goes down. But they said the problems are rooted in the uneven development of the internet, because certain data centers are more important than others; cloud businesses run by Amazon, Google and Microsoft concentrate more power; and corporate customers of cloud services don't always want to pay extra for backup systems and staff members.

Sean O'Brien, a lecturer in cybersecurity at Yale Law School, said the outages call into question the wisdom of relying so much on big data centers. " 'The cloud' has never been sustainable and is merely a euphemism for concentrated network resources controlled by a centralized entity," he said, adding that alternatives like peer-to-peer technology and edge computing may gain favor. He wrote after last week's outage that the big cloud providers amounted to a "feudal" system.

"There are many points of failure whose unavailability or suboptimal operation would affect the entire global experience of the internet," said Vahid Behzadan, an assistant professor of computer science at the University of New Haven... "The fact that we've had repeated outages in a short period of time is a cause for alarm," Behzadan said, noting that U.S. businesses have staked a lot on the assumption that cloud services are resilient.

NBC cites reports that some companies are now taking a look at using multicloud solutions. And these outages may encourage businesses to finally take the plunge, adds the CS professor from New Haven.

"The internet will not die any time soon. But whatever won't kill the internet makes it stronger."
  • by klipclop ( 6724090 ) on Sunday December 19, 2021 @01:37AM (#62096185)
    Running too big of a complex infrastructure... This is fine when it's business as usual and things are running smoothly. But it's not so fun when you are doing routine work or maintenance and things freak out unexpectedly and create a mass outage. Not much can be done when you are overworked with a lean team.
    • by Z00L00K ( 682162 ) on Sunday December 19, 2021 @02:16AM (#62096233) Homepage Journal

      "Don't have all your eggs in the same basket."
      and
      "Domino effect"

      Always consider these when designing your infrastructure.

      • I always thought if I lined my eggs up like dominoes, the double negative cancels out the effect...

        • by Z00L00K ( 682162 )

          Sorry, you get an exponential effect instead.

          Number of services to the power of number of bad decisions.

          But if you have a service like SAP then that counts as 10 services because it's 10 times worse than running your own written in Cobol.

          • Which leads to the idiom, good architecture is bad job security.

            • by Z00L00K ( 682162 )

              I work on the aspect that the reward for a good job is another job. It has kept me busy since '87.

          • Cobol's for lightweights. My keyboard has 2 keys, 0 & 1.
            • Cheater, using KEYCAPS! I rip the keycaps off and drool onto the contacts, entering my program as a raw binary file. I do use the numpad primarily because of the proximity of the two keys, but sometimes when the code is really flowing I have to switch around or risk shorting out both to each other. It can be abused tho, that's what I do when I need random constants.

      • And I think that was the entire selling point of the cloud, that everything is magically distributed and redundant since it's not like you have a server in a closet somewhere.

        But instead Amazon has a server in a closet somewhere and they'll occasionally knock it offline and bring half of the internet down with it.

    • What is the basis of your comment? Do you have some insider info that shows that the various Cloud outages this year have been the result of understaffing?

      • What is the basis of your comment? Do you have some insider info that shows that the various Cloud outages this year have been the result of understaffing?

        Well, one of the basic premises of the cloud was that you could eliminate your own employees in favor of what the cloud service would supply.

        So in our zeal, we created a system that can take hundreds (or more) businesses offline.

        Hard to say what adequate staffing would look like for keeping those businesses up and running, making money.

        Regardless, the fix for cloud problems is more cloud. 8^/

        • So in our zeal, we created a system that can take hundreds (or more) businesses offline.

          But why do we care about hundreds of businesses? As a user I frankly don't give a fuck about anything other than the one website I'm trying to reach. As a company I sure don't give a fuck about any business other than my own.

          Hard to say what adequate staffing would look like for keeping those businesses up and running, making money.

          When you run a business based on a single core competence, staffing challenges are significantly reduced. And back to my comment: what companies do is completely irrelevant. Your comment postulated that resource issues were the reason for cloud outages, and I want you to back that comment up with some relevant source, not just attempted misdirection at other companies not related to the issue at hand.

          • So in our zeal, we created a system that can take hundreds (or more) businesses offline.

            But why do we care about hundreds of businesses? As a user I frankly don't give a fuck about anything other than the one website I'm trying to reach. As a company I sure don't give a fuck about any business other than my own.

            I demand that any business I'm buying from online actually be online. If they are not, I don't buy from them. I'm not certain I get your point: if my business is in the cloud, I want it to work all the time. And if there are other businesses using the same cloud provider, they go down the drain with me.

            When you run a business based on a single core competence, staffing challenges are significantly reduced. And back to my comment: what companies do is completely irrelevant. Your comment postulated that resource issues were the reason for cloud outages, and I want you to back that comment up with some relevant source, not just attempted misdirection at other companies not related to the issue at hand.

            Explain how I am engaging in misdirection, other than it seems to trigger you bigly. OP was making the resource assumptions, not me. Given the great resignation event, https://mytechdecisions.com/ne... [mytechdecisions.com] there see

              I demand that any business I'm buying from online actually be online.

              My point is that the cloud isn't "fragile" simply because it collects all businesses under one umbrella. They all go down, but you're not interested in all of them, you're interested in the one you're doing business with. The only thing relevant to you is: was the business more likely to be online when they did everything themselves, or do they have better uptime with cloud services?

              Slashdot talks about the unreliability of the cloud without any evidence that there was ever a better alternative. The thing is

              • My point is that the cloud isn't "fragile" simply because it collects all businesses under one umbrella. They all go down, but you're not interested in all of them, you're interested in the one you're doing business with.

                With the old system when one site got attacked it didn't affect any other sites. With the new system they attack the cloud provider when they want to attack a site, and it affects many sites.

                So while your statement is technically correct, it's meaningless since as a user I'm not only not interested in other sites going down, I'm also not interested in why the site I want to use is down. It doesn't matter to me whether the people running the site fucked up, or the cloud hosting service fucked up. Either way,

          • As the very old, pre-internet saying goes, "No man is an island." We all depend on thousands of companies every day.
    • Running too big of a complex infrastructure... This is fine when it's business as usual and things are running smoothly. But it's not so fun when you are doing routine work or maintenance and things freak out unexpectedly and create a mass outage. Not much can be done when you are overworked with a lean team.

      That's exactly the whole premise of the "cloud": you don't have to provide your own storage and the people needed, because someone else will. They look at it as just providing storage, and you end up with millions trying to use a few providers, in the standard pattern of the "big boys" getting rid of the little players.

      And all the vaunted advantages of the cloud? Not worth much when you can't get your data, or can't run your business.

      This was all obvious way back in the day when the breathless pronouncements of the Nirva

    • by Junta ( 36770 )

      I would say we have made the infrastructure needlessly complicated in many ways. I routinely help people whose systems have abstractions piled upon abstractions: network paths that go through *multiple* NAT transitions even if they never leave a datacenter (toward the 'simplicity' of letting arbitrarily many instances of an application hardcode themselves as '192.168.0.1' and letting some middleware NAT that into something actually useful, for example).

      All this makes for a lot of room for difficult to de

  • by Hey_Jude_Jesus ( 3442653 ) on Sunday December 19, 2021 @01:39AM (#62096187)
    Having the Internet concentrated in a few companies' hands means that a bug or bad programming or DDOS will stifle the Internet in one day. IMHO. YMMV.
    • a bug or bad programming or DDOS

      Or a government. An episode of Black Mirror dealt with this. It involved the Prime Minister of England fucking a pig, but there's a deeper message.

      • The deeper message is to tell newcomers to skip the first episode and continue with the rest. So many recommendees get so turned off by the first episode, they abandon the show and miss out on all the fun.
    • And yet people will argue for using programming frameworks, and indeed that having fewer of them is better, even though this has the same concentration problem, whatever your rationale for doing it.

  • by Klaxton ( 609696 ) on Sunday December 19, 2021 @01:55AM (#62096207)
    I guess multicloud means relying on not just AWS but also Azure and Google? Then you would have your nuts in a vise 3 ways instead of just 1.
    • The more complex a system is, the more likely something goes wrong. They just keep layering and layering till one day it collapses like a house of cards. Maybe if the goal wasn't driven by greed, to serve 6 billion people all at the same time, they wouldn't be in this mess.
      • by Sique ( 173459 )
        Complexity is not necessarily a sign of greed. Even some very convenient things like Single Sign-On have much more complexity than keeping an .htaccess file in your wwwroot. On the other hand, .htaccess is already a hell of a support nightmare if you are maintaining just several hundred users.
      • The more complex a system is, the more likely something goes wrong. They just keep layering and layering till one day it collapses like a house of cards.

        No, that would only be if the system depends on all of the providers. If you have multiple providers that act as backups in case something happens to a provider, your system depends on only one of the providers working, so you only get a failure if something goes wrong at all of the providers at the same time.

        For the electricians in the room, it's the difference between a series circuit and a parallel circuit.
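        A toy availability calculation makes the series/parallel point concrete (the 0.1% per-provider failure probability below is made up, purely to illustrate the arithmetic):

```python
# Toy model: each provider is independently "down" with probability p.
# Series (the system needs ALL providers up): it fails if ANY provider fails.
# Parallel (any one provider suffices): it fails only if ALL providers fail at once.

def series_failure(probs):
    """P(at least one required component is down)."""
    up = 1.0
    for p in probs:
        up *= (1.0 - p)
    return 1.0 - up

def parallel_failure(probs):
    """P(every redundant component is down at the same time)."""
    down = 1.0
    for p in probs:
        down *= p
    return down

probs = [0.001, 0.001, 0.001]       # three providers, each down 0.1% of the time
print(series_failure(probs))        # ~0.003: layering dependencies multiplies risk
print(parallel_failure(probs))      # ~1e-09: redundancy multiplies reliability
```

        The difference between the two numbers is exactly the series-vs-parallel circuit distinction the parent describes.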

        • The more complex a system is, the more likely something goes wrong. They just keep layering and layering till one day it collapses like a house of cards.

          No, that would only be if the system depends on all of the providers. If you have multiple providers that act as backups in case something happens to a provider, your system depends on only one of the providers working, so you only get a failure if something goes wrong at all of the providers at the same time.

          Sounds like an excellent argument for a local backup of your cloud. 8^)

    • by jon3k ( 691256 )
      If you can operate your infrastructure on all three platforms you can shift workloads to whichever is actually functioning, or whichever is the least expensive. The most obvious way is just running your workloads containerized on each platform's Kubernetes service. The problem is that it adds a layer of abstraction, which adds complexity and means you are probably using the lowest common denominator of each provider's services.
      • Looking at these issues, I wonder just how well that would scale today. It isn't as though Netflix or Disney could just switch half their workload around dynamically and not cause major problems that way, especially if it is triggered by an outage.

        Something tells me we need a bit of a paradigm shift to make sure problems can be contained better to facilitate geographical performance and improve recovery modes.

        • by jon3k ( 691256 )
          Oh definitely, and I'm not talking in terms of zero downtime or let's say zero impact. But theoretically, if Netflix (et al) spread their workload over several regions and several providers, and they had an outage with one provider (or provider region) you might take a performance hit but you could VERY quickly spin up replacement containers in the environments that are still functional. So maybe you have reduced performance for 5 or 10 minutes while you bring a few thousand (probably more like hundreds o
    • I guess multicloud means relying on not just AWS but also Azure and Google? Then you would have your nuts in a vise 3 ways instead of just 1.

      One of my favorite pro-cloud statements was made to me by a true believer. When I questioned him on what would happen if the cloud went down and you couldn't get any of your data out, he (without irony) told me: "Well, you have to duplicate all that stuff locally, you know!"

  • this is why (Score:5, Informative)

    by buss_error ( 142273 ) on Sunday December 19, 2021 @02:06AM (#62096225) Homepage Journal

    "No news is not good news, it means you're ignorant" - wise old DevOps guy
    Monitor: Second to second. Computers were made for repetitive boring tasks. Monitor the alarm systems as well.
    Automate: Are you doing the same thing? You're not automating enough.
    Watchers: As many as sensible. No one likes On Call. Plan for the no answer. It will happen even to the best.
    Idiots: Learn to love 'em. They'll break your stuff in new and breathtaking ways. Save killing them for later. After you fix it.
    Test: Frequently. Synthetic tests are your friend. Monitor them.
    Redundancy: As much as you have budget for. Document when it's not enough. You'll need it later when the PHBs call.
    Monoculture: To be avoided where possible. Accept when it's not possible.
    Document: You stopped? Oh, bleep!
    Logs: Read 'em. An attacker can fail a million times. You get one chance to get it right.
    Failure: You. Will. Fail. If you never fail, you're not doing anything.
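    The Test and Monitor items can be sketched as a minimal synthetic-check loop. The URLs, service names, and one-failure threshold below are made-up placeholders, and real alerting would page someone rather than print:

```python
import urllib.request
from urllib.error import URLError

def check(url, timeout=5.0):
    """Synthetic probe: True if the endpoint answers with HTTP 2xx/3xx."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except URLError:
        return False

def evaluate(results, max_failures=1):
    """Decide whether to page someone, given a dict of {service_name: ok?}."""
    failed = [name for name, ok in results.items() if not ok]
    return ("ALERT", failed) if len(failed) > max_failures else ("OK", failed)

# The decision logic is pure, so it can be tested without a network:
print(evaluate({"web": True, "api": True, "db": False}))   # ('OK', ['db'])
print(evaluate({"web": False, "api": False, "db": True}))  # ('ALERT', ['web', 'api'])
```

    Keeping the probe and the alert decision separate also makes the checklist's "Monitor the alarm systems as well" point easy: the decision function is itself something you can test on a schedule.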

    • This post needs a +200:informative.

      • I'd even go as far as saying it needs a +255: Informative, so we don't waste part of the unsigned byte used for post scores.

    • I think you're missing a BOFH in there somewhere.

    • by smillie ( 30605 )
      The PHBs were saving money by going to the cloud. I could have saved money in the server room too. All I would have to do is skip the backups and redundant servers, like the cloud servers were doing. That would have cut my server room costs in half. Considering how often we lost a server, no one would have noticed for a year or so.
  • by backslashdot ( 95548 ) on Sunday December 19, 2021 @02:28AM (#62096253)

    I would worry more about ignorant fools spouting fear of 5G and technology in general.

  • No (Score:4, Informative)

    by Luthair ( 847766 ) on Sunday December 19, 2021 @02:31AM (#62096259)
    It just means that everyone is affected at once, instead of a single site being offline.
  • I usually use NoScript to see which domains a page loads from and only allow the ones I want. On some webpages, there are 30+ different other sites used to help load resources on the page. Granted, many of those are for tracking purposes, but still, it only takes one of those sites to go down and the page will not function as intended. What happens if we ever get to a point where a small section of the internet goes down for a week?

  • by Ritz_Just_Ritz ( 883997 ) on Sunday December 19, 2021 @03:16AM (#62096303)

    A number of companies have been doing that for years. Even better when you can configure your "on prem" infrastructure to behave as your own private cloud. Abstract the lot from your stakeholders/customers and move workloads around based on need/cost whenever you'd like. It's not even particularly hard, but it can be expensive, especially if you're saddled with a heaping helping of legacy technology that doesn't lend itself to scaling out.

    Best,

    • by tlhIngan ( 30335 )

      A number of companies have been doing that for years. Even better when you can configure your "on prem" infrastructure to behave as your own private cloud. Abstract the lot from your stakeholders/customers and move workloads around based on need/cost whenever you'd like. It's not even particularly hard, but it can be expensive, especially if you're saddled with a heaping helping of legacy technology that doesn't lend itself to scaling out.

      There are a few companies doing it - Apple uses AWS, Azure and Google Cl

  • Heck yeah!!!

    Everyone should have the entire content of the internet downloaded to their harddrive
    • A distributed browsing system could achieve this, but oddly enough the major attempts at that are all married to blockchain.

      However, the solution is a bit simpler, in that users only download what they use. A number of webpages are like this for me. The largest issue is distributing updates to content, but if the content is rarely updated it's not a big deal, and the system design can even be utilized for archival purposes. Say the major authority for a site is experiencing an outage; then you get an archived vers

    • Well, when my hard drive crashed, I wrote to the NSA. I politely asked for their copy of my hard drive so I could reimage a new one. Those selfish bastards wouldn't help a guy out. The nerve of those people.

      The NSA: The only department within the government that actually listens to you.
  • by Anonymous Coward on Sunday December 19, 2021 @03:28AM (#62096311)
    Cloud has the same plusses and minuses that timesharing had decades ago. Just at a faster baud rate.
  • by enriquevagu ( 1026480 ) on Sunday December 19, 2021 @03:35AM (#62096315)

    The Slashdot effect is no longer a thing, thanks to cloud and autoscaling. So no, it is not more fragile than before, despite occasional service drops because of cloud issues.

    • Slashdot effect is no longer a thing thanks to Slashdot being a pale ghost of its former self. You can see this reflected in the comment counts. A really slammin' discussion has like a third as many comments as it would have in days of yore.

  • by splutty ( 43475 ) on Sunday December 19, 2021 @03:54AM (#62096333)

    I thought the whole point of the cloud was to remove single points of failure, but it's looking more and more like the cloud itself is now the single point of failure.

    AWS, CloudFront, Cloudflare. All proven to be single points of failure in the last few months.

    • by thegarbz ( 1787294 ) on Sunday December 19, 2021 @04:44AM (#62096369)

      When you clump problems together you will always have a single point of failure, especially when you consider that people design and administer the systems which divide those single points into multiple areas.

      The article's premise is that the internet is more fragile because a cloud failure takes down multiple services and websites at one go. It doesn't mention that those websites individually were typically down far more frequently before moving online.

      As a user you only care if you can visit the website. As a business you only care if the user can get to your website. The question is not "is there a single point of failure?"; the question is "is AWS down more often than my site was when I managed it myself?", and the answer is an overwhelming no.

      If you just read the news, however, you'll think it's the other way around. But that's no surprise: Slashdot doesn't run a story every time a single website goes down.

      • As a user you only care if you can visit the website.

        As a user, it is absolutely crucial that multiple websites are not down at the same time.

        If Slashdot and Reddit are down at the same time, I might have to start working.
        And if Stack Overflow is down at the same time, that option is gone too...

      • It doesn't mention that those websites individually were typically down far more frequently before moving online.

        Actually, I think those websites were typically more available when they were offline. You'd go to a bookstore or a magazine shop. Once they went online, that's when they started having problems...

        (Sorry, couldn't resist...)

    • Individual clouds often have single points of failure, like load balancers, I believe. Multi-cloud should resolve this, but I don't think there are many tool suites designed especially for it. Personally, hearing how one manages multi-cloud services would be interesting. Do you test latency and primarily serve through the provider with the least latency? Roll the dice with a failover? I have never managed such a deployment.
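      On the latency question: a simple-minded approach is to probe each provider's health endpoint and route to the fastest one that answers. The provider names below are placeholders, and real multi-cloud routing is usually done in DNS or a global load balancer rather than in application code; this is just a sketch of the idea:

```python
import time
import urllib.request

def probe(url, timeout=2.0):
    """Return round-trip time in seconds, or None if the probe fails."""
    start = time.monotonic()
    try:
        urllib.request.urlopen(url, timeout=timeout).close()
        return time.monotonic() - start
    except OSError:
        return None

def pick_provider(latencies):
    """Choose the lowest-latency healthy provider from {name: seconds_or_None}."""
    healthy = {name: rtt for name, rtt in latencies.items() if rtt is not None}
    if not healthy:
        raise RuntimeError("no healthy providers")
    return min(healthy, key=healthy.get)

# The selection logic, separated from the network I/O, is testable on its own:
print(pick_provider({"aws": 0.031, "gcp": None, "azure": 0.054}))  # aws
```

      A failed probe doubling as a failover signal (the None case) gives you both of the strategies the parent mentions in one loop.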

    • by gweihir ( 88907 )

      The cloud does remove single points of failure. The problem is that it does so with very complex mechanisms, and hence introduces a ton of new single points of failure, some of which are easy to trigger because they are not well understood.

      A system with just a few single points of failure is easy to make reliable: spend extra work on each of those failure points to make them unlikely to trigger, and have an efficient recovery strategy in case it happens. How do you get a system with a low number of single po

    • The more complicated a system becomes, the more likely it will encounter errors. Often the backend storage for cloud services is the Achilles' heel. I have seen bugs in storage firmware take down storage arrays. For bugs to get fixed, someone has to be the unlucky sod who experiences them firsthand.
  • by Rosco P. Coltrane ( 209368 ) on Sunday December 19, 2021 @04:19AM (#62096349)

    Is water wet?

    How hard is it to grasp that running software locally and saving files locally is less fragile than running it with a split personality - one backend on some remote server somewhere, a user-interfacing frontend running on a VM inside a web browser inside a real computer, exchanging data through dozens of shoddily put together semi-stateful layers of web 2.0 crap protocols - the whole construct wasting fabulous amounts of CPU and bandwidth in the best of times, and just not working at all when the internet goes down?

    Not to mention, the only true reason why all this shit is being foisted on us is because Big Data is hell-bent on regaining the control they lost when the personal computer displaced the mainframe 50 years ago, and because they want ALL your data all the time. How on Earth they managed to convince decision-makers of today that this is a good idea, I'll never know...

    • How hard is it to grasp that running software locally and saving files locally is less fragile

      Holy jump to conclusions, Batman. Before you make a comment like "water is wet", define "wet". Define the measure, the problem. Then you can talk about how good or bad a solution is. Lumping everything into cloud vs. non-cloud, and local vs. remote, is just stupid. But since you want to generalise, the answer is no: cloud services are not more fragile when you generalise; if anything they have shown themselves to be far more reliable than your own local solutions.

      Local saving of files? Local software? How do you store it?

      • by misnohmer ( 1636461 ) on Sunday December 19, 2021 @07:56AM (#62096567)

        You don't always need "someone else's computer" in the loop, and it doesn't always add any reliability. For example, why on earth do you need the cloud for an app on the phone to connect to a home thermostat to change the local temperature?

        The proper solution is to separate functionality that requires the cloud from other functionality, but companies don't do that. For the thermostat example, your phone app should be able to connect to your thermostat directly, and you should be able to use it even if the entire internet dies. You could offer additional cloud functionality, such as connection tunneling for those who want to access their thermostat from outside the home but don't know how to set up a tunnel or VPN themselves. You could also offer "cloud settings backup" as an option for those who want to be able to change out thermostats and have their settings restored. You can even offer data collection and visualization in the cloud, but again, optional and not required for the use of the product.

        Why do companies not separate functionality which requires the cloud, and actually often force their cloud functionality as the only way to use the product? Because they want your data, and because it's easier to implement everything in the cloud instead of separating. Your music speakers, your in-home automation, your security camera, your stove, oven, fridge, washing machine, or toaster should not require the internet to function! And yet, many do (and some go down hard when that "someone else's computer" malfunctions).

        • You don't always need "someone else's computer" in the loop, and it doesn't always add any reliability.

          Indeed. I'm not sure why you're telling me this though. The core point of my entire post was to not make a conclusion based on a generality and to look at specifics. It seems like you may have missed that part of my post.

          For example, why on earth do you need the cloud for an app on the phone to connect to a home thermostat to change the local temperature?

          They don't. There's not a single smart home company that doesn't allow you to change the thermostat temperature from a local controller. If you're asking why they specifically need the cloud for the phone, well the answer is I (the people with UIDs below about 2-3million) forced them to. No

          • No one wants their phone to work only when they are on the local network; that's not smart. No one wants to battle with ports and configuration and SOCKS proxies. No one wants to question why an app works one moment on one network but then suddenly not on another network. We sat on our laurels. We promoted the use of NAT. We broke the end-to-end nature of the internet as a network and thus isolated systems in ways that ensure you either need to be an IT guru, or you need a centrally accessible command server to make it work.
            I don't for a moment criticise the requirement of a cloud service by a thermostat because there's no other practical way of achieving the same functionality that is accessible to everyone.

            This is a false dichotomy. You can do both. You can allow easy local access, AND remote access through a hosted service. I bet many "cloud only" devices actually do allow local access once you've authenticated with the central service, you just don't realize it because you haven't unplugged the Internet to test the app.

            Philips Hue is easy: you send a request to the hub, press a physical button, and get a token for local API access. Two curl commands. A third curl command and you're controlling your lights.
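            The same flow can be sketched in Python against the Hue bridge's documented local REST API. The bridge address and app name below are placeholders, and the actual network calls are left commented out because they need a real bridge on your LAN and a press of the link button:

```python
import json
import urllib.request

BRIDGE = "192.168.1.2"  # placeholder: your Hue bridge's LAN address

def build_request(method, path, body=None):
    """Build the local REST call; kept separate from I/O so it can be tested offline."""
    data = json.dumps(body).encode() if body is not None else None
    return urllib.request.Request(f"http://{BRIDGE}{path}", data=data, method=method)

def api(req):
    """Send a request to the bridge and decode the JSON reply."""
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)

# 1) Press the physical link button on the bridge, then register an app name
#    to receive a token ("username") for local API access:
# token = api(build_request("POST", "/api", {"devicetype": "my_app#laptop"}))[0]["success"]["username"]

# 2) Turn on light 1 entirely over the LAN, no cloud involved:
# api(build_request("PUT", f"/api/{token}/lights/1/state", {"on": True}))
```

            Nothing here touches the internet: both requests go straight to the bridge's LAN address, which is the parent's point about local access surviving a cloud outage.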

            • You can do both. You can allow easy local access, AND remote access through a hosted service.

              Except you can't. Doing so invariably involves tradeoffs in design of your software which if you re-read my post I specifically addressed.

              Philips Hue is a great example of a shit slow app that struggles to deal with the ever-changing nature of a phone being on a local network or accessing via cloud. It was specifically what I had in mind when I wrote about causing "Waiting". Many consumers don't like that idea, and complaints about speed are incredibly common about the Hue, especially when the network context is

              • You can do both. You can allow easy local access, AND remote access through a hosted service.

                Except you can't. Doing so invariably involves tradeoffs in design of your software which if you re-read my post I specifically addressed.

                I think your experience seems limited here. You can absolutely do both. I have personally designed products in the past which can do both, and I use products every day which do both. If you want a good example of how that can be done right, check out the Ubiquiti UniFi products like the CloudKey or UDM. Phone app or web access via cloud, or local network. Once connected, no difference in user experience. You can even use cloud credentials (cached auth tokens if the cloud is down but you have logged in from t

          • They don't. There's not a single smart home company that doesn't allow you to change the thermostat temperature from a local controller. If you're asking why they specifically need the cloud for the phone, well the answer is I (the people with UIDs below about 2-3million) forced them to.

            I have an Aprilaire 8910W thermostat which for the life of me I cannot figure out how to do what you are so certain can be done locally. It only allows me to connect to it through the app, through their cloud. Their own support people tell me there is no way to connect locally (other than ICMP ping), but since you claim every single company allows it, can you please share with me how I can operate this thermostat from a local computer without the cloud? I don't even need an app, just tell me the APIs in wh

        • They use the cloud to easily get around NAT, firewalls, and dynamic IP addresses. They then leverage it to provide a path to recurring revenue.

          Cloud-only should never be allowed; everything should allow a VPN and local-network mode that bypasses the cloud entirely.
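          The NAT point is worth spelling out: the device makes an outbound connection to the broker, and commands from the phone are pushed back down that already-open channel, so no port forwarding, static IP, or firewall hole is needed on the home network. A hypothetical sketch, with a local socket standing in for the vendor's relay:

```python
import socket
import threading

def device_loop(relay_addr, handler, max_cmds=1):
    """The device dials OUT to the relay. Outbound connections traverse
    NAT, firewalls, and dynamic IPs without any port forwarding, which
    is exactly why vendors put a cloud broker in the middle. Commands
    arrive over the connection the device itself opened."""
    with socket.create_connection(relay_addr) as conn:
        for _ in range(max_cmds):
            cmd = conn.recv(1024).decode().strip()
            if not cmd:
                break
            handler(cmd)
```

          The same trick is what a self-hosted VPN or reverse tunnel gives you without the vendor in the loop: one outbound connection from inside the network, reused for everything.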

    • Management today thinks they are very capable because they can run an AWS script for the services they are supposed to provide. If you asked them whether they could run whatever they need on a (set of) server(s) on their own premises, they would actually not be capable of doing so.

      Myself, I'm from the generation that managed to run our own BBSes, without degrees/masters/certifications/diplomas. Most young computer techs I meet, I would not trust to run their own DNS server. And it is not that

    • Not to mention, the only true reason why all this shit is being foisted on us is because Big Data is hell-bent on regaining the control they lost when the personal computer displaced the mainframe 50 years ago

      This is q-anon grade crazy juice, who the fuck is "they"? "Big Data", the undead pre-Internet mainframian enemy Eye Be Em, risen again as Cloudicus Amazonius, the great deceiver, devourer of data?

      You have to do it up a little or it won't be believable, like lizard people.

  • by thegarbz ( 1787294 ) on Sunday December 19, 2021 @04:40AM (#62096361)

    This is like asking if using a power grid makes electricity more fragile than running each device individually on batteries. If one device's battery dies, I could use another; if the power grid goes down, I can't use any. But do I really want to use another device?

    The widespread adoption of massive cloud platforms has made individual web services *far* more reliable. The difference is that when Reddit goes down because its cloud provider went down, I can't simply go to ${other_website} and keep internetting; instead I need to go read a physical book. The arguments about putting all your eggs in one basket assume there is some inherent need to use the internet, rather than merely a desire to use a specific website.

    Frankly, if I sit down to visit reddit, I don't give a crap whether reddit is down or the entire internet is down. Either way I'm not getting my dose of reddit. But thanks to cloud providers we are long past the days when a site could be knocked offline simply by having a Slashdot story posted.

    • But thanks to cloud providers we are long past the days where a site is knocked offline simply by having a Slashdot story posted.

      Slashdot couldn't take down a site on a modem any more. There just aren't enough people here to produce a really quality slashdotting these days.

    • If you're running a website or some other web service (like a shopping cart or payment system), then using the cloud probably makes sense for you. The cloud, or more properly "managed servers and services", is a good solution for those who need it.

      The problem is that "the cloud" is used in lots of places where it isn't necessary. Anything that doesn't need to be connected to a web service is, by definition, more complex and fragile if it's made dependent on the cloud. TFS mentions robot vacuums not worki

      • TFS mentions robot vacuums not working because of some outage

        Don't rely on TFS. It makes you dumber. Read the article. The robot vacuums were unable to be summoned by some guy sitting at the table with his iPhone. They never stopped working. There's no reason the person couldn't go over and manually push the big silver button on top of his Roomba. Nothing was "reliant" on a cloud service here. It was a value-added extra bought by some slob who couldn't even be fucked to get off his arse and push a button, and who instead bitched about it in a news column.

    • by sjames ( 1099 )

      Slashdotting was almost never a matter of the server not being capable enough to serve all the requests. It was because the network connection couldn't handle it. In the era of slashdotting, fractional T1 connections were still a thing. These days, even grandma has a better connection than most businesses had back then.

      The problem with the cloud is that reddit is down, and you can't just watch a movie instead because that's down too. So is your robot vacuum and your thermostat. On the bright side, it doesn't matt

  • Obviously (Score:5, Insightful)

    by gweihir ( 88907 ) on Sunday December 19, 2021 @08:13AM (#62096579)

    "A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable." — Leslie Lamport
    That is from 1987 and describes the cloud exactly.

    The problem with the cloud is that it is already vastly more complex than an on-premises installation at the infrastructure level alone. Essentially, the cloud is a gigantic KISS violation, and nothing can be done about it. Oh, sure, you can try to hide the complexity (a thing that, e.g., Microsoft is very fond of and frequently gets wrong), but the complexity is there, and there will always be circumstances in which it asserts itself. In that case, you suddenly have a much bigger problem than any you would have had with a simpler system. Hence the cloud is brittle.

    Face it: the only way to do good engineering is if some smart person can still understand the whole thing, at least up to interfaces with clear, simple, and dependable behavior. The only way to get there is to scrupulously respect KISS. That is why KISS is at the core of any good engineering. That is why the UNIX paradigm of writing code and designing systems ("do one thing only and do it well") is the only approach that allows complex computer systems to be reliable and secure. And that is why much of what gets built in software today is destined for the trash-heap of history as a collection of engineering failures.

  • It hides complexity, as cloud providers claim they can solve complex problems for you

  • by cloud.pt ( 3412475 ) on Sunday December 19, 2021 @09:36AM (#62096735)

    ...that doesn't use the cloud. It's impossible. All I want is a home speaker, camera, CO or temperature sensor that can work with a Windows/Linux/MacOS/Android/iOS app that doesn't require me to create a friggin account. Seriously we are losing focus on purpose because these companies decided to make us the product.

    And this isn't just true for hardware. I was looking for decent FOSS software for a wiki or collaborative anything to run on my own webserver. They do exist, but the amount of prominence Google gives to Jira/Confluence, Trello, and other paid tools is outrageous. The internet is going from a platform that provides knowledge to a platform that provides publicity for paid services. And that's just awful. Even the fact that piracy is pretty much dead for the masses these days goes to show that the internet is no longer for people, but for companies.

    • by oGMo ( 379 )

      If you just want a device you can slap up, click a button, and use.. well, that's what you're paying extra-plus-privacy for.

      However if you want to do a bit of legwork (fingerwork?), this stuff is actually amazingly accessible. Grab yourself some Pi Zero 2 W (or Pi 4 depending on reqs), something from like the camera department [adafruit.com] or some kind of speaker/mic board [slashdot.org] or whatever you want. (There are plenty of sources and plenty of hardware variety for this stuff.. adafruit is just recognizable.)

      Want a custom voi

      • It can be easier than that: just a Synology NAS will get you 90% of the way there, or go for BlueIris if you want to do Windows.

        Getting around the cloud factor is easy enough if you do the research up front and avoid companies that make life difficult for no apparent reason (like Ubiquiti).

      • I actually have all of that, the Pi Z2W, cameras for it... the works. But I don't want ONE camera. I want cameras in multiple places, and multiples for friends and family... I can't set up 20-ish Pi Z2Ws for all of them. Too much cost, work, complexity, and too many points of failure that I am not willing to maintain (and they are not capable of maintaining) for long.

        Hardware and technology have gotten cheaper, so companies just produce more convoluted ways to take your cash by selling these attached to something that makes it more e

        • I'm really surprised nobody is selling an assembled pi camera. Just not enough money in it? Not reliable enough to ship devices and expect them to work?

          • Pis, especially the Zeroes, are in horribly short supply, as the Foundation doesn't make enough of them and focuses on actually spreading them around internationally. This is great as it provides a level of equality for makers around the world, but pockets of demand exist and there's really no supply to cater to actual "products" that are supported by them.

            It's not like, say an Nvidia Jetson device, where you have an actual commercial company backing you up on industrial/consumer needs. There are literally tables for

          • Not to mention that selling an assembled kit has its own challenges. There are multiple reasons why even the RPi Foundation themselves only shipped something like the RPi 400 "computer kit" so recently: quality assurance costs, the cost of maintaining a line for boards, accessories, and the plastic casing with its very costly, not-very-durable molds, and most important of all, providing a support chain for an assembled piece of kit which you sell to "expensive" noob users (who clog support efforts), and faults which you ca

  • Amazon and Alexa mobile apps are having troubles right this moment, and have been since about 6am CST when I woke up. Website seems to be fine though.
  • by sjames ( 1099 ) on Sunday December 19, 2021 @10:15AM (#62096793) Homepage Journal

    The other half of the problem is that the robot vacuums and pet food dispensers depend on some server on the internet in the first place. I'll go ahead and add that thermostats and door bells (with or without video) also shouldn't be dependent on an internet server.

    • But do you want a phone app? For 99% of people, remote control is only possible with a cloud service brokering connections.

      • by sjames ( 1099 )

        Even in those cases, the food dispenser and vacuum should continue to follow their programmed schedule. They should respond to a local command (on the phone or otherwise).

        Ideally, it should be feasible for a more sophisticated user to relay remote commands through their home router (possibly using a home computer as a jump box).

        Likewise, the Ring should be willing to store video on a local NAS.
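        The schedule-lives-on-the-device idea can be sketched like this; the class and command names are made up, but the point is that neither the scheduled feeds nor the LAN command path touches the internet:

```python
import datetime

class Feeder:
    """A pet-food dispenser that keeps working when the internet doesn't.

    Hypothetical sketch: the feeding schedule is stored on the device
    itself, and commands from a phone on the same LAN need no cloud.
    A cloud relay, if present, is a value-added extra, not a dependency.
    """

    def __init__(self, schedule):
        self.schedule = set(schedule)   # (hour, minute) feed times
        self.dispensed = set()          # slots already fed today

    def tick(self, now):
        """Driven by the device's own clock; needs no network at all."""
        slot = (now.hour, now.minute)
        if slot in self.schedule and slot not in self.dispensed:
            self.dispensed.add(slot)
            return "dispense"
        return "idle"

    def local_command(self, cmd):
        """Commands arriving over the LAN also work with no cloud in the loop."""
        return "dispense" if cmd == "feed_now" else "ignored"
```

        With this split, a cloud outage only costs you the remote-summon convenience, never the core function.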

  • Why do journalists always report that companies' response to a cloud outage is to dial it straight to 11 and go multicloud? I've yet to read about a major cloud outage with a splash radius beyond a single region. Try multi-region first. It'll be way cheaper.

    I usually only hear about multicloud when it comes to splitting out workloads. Example: company much prefers cloud A, but cloud B has this neat feature cloud A doesn't have for this one part of their pipeline. Another Example: company much prefers clo

  • But on the other hand, you can use the cloud for redundancy and get a less fragile service.
    In "regular" cases, I think the biggest advantage of multi-cloud is having servers in several different regions of the world, so you can route around regional failures.

  • It's likely that the large cloud providers can run a highly reliable service, probably better than most of their customers could. But when it fails, it fails big time and makes the news, while downtime at most of their customers wouldn't even have made a small side note in the bottom corner of page 12.

    Conceptually, large cloud providers are a problem. The Internet's resilience is based on the fact that individual parts may fail, but the whole stays up. Whenever someone becomes so central that their failure t

  • ... making the internet more fragile?

    At the end of the day, the cloud is bare metal: it's computers in server rooms.
    It has taken on an almost mythical status in the minds of people who just don't understand it.

    The evolution happened due to the sheer volume of services going online, and in fact internet services are far less fragile than they previously were.

    The issue is too many eggs in one basket - and that absolutely creates single points of failure.

    The future is likely to be peer to peer - complete

  • Having things so centralized means there are a lot fewer possible points of failure, but if one of them fails, the effects are bigger. In the old days, each company ran its own web server for its own site. They often weren't very robust. Servers went down all the time, but they only affected that one site.

    Amazon, Microsoft, and Google put way more resources into building and supporting their infrastructure. If you run your website on one of their clouds, it will probably have much higher uptime than if

  • The bosses have somebody to blame and the homeowner isn't going to buy a new lightbulb.
