'Copilot' Price Hike for Microsoft 365 Called 'Total Disaster' with Overwhelmingly Negative Response (zdnet.com) 129

ZDNET's senior editor sees an "overwhelmingly negative" response to Microsoft's surprise price hike for the 84 million paying subscribers to its Microsoft 365 software suite. Attempting the first price hike in more than 12 years, "they made it a 30% price increase" — going from $10 a month to $13 a month — "and blamed it all on artificial intelligence." Bad idea. Why? Because...

No one wants to pay for AI...

If you ask Copilot in Word to write something for you, the results will be about what you'd expect from an enthusiastic summer intern. You might fare better if you ask Copilot to turn a folder full of photos into a PowerPoint presentation. But is that task really such a challenge...?

The announcement was bungled, too... I learned about the new price thanks to a pop-up message on my Android phone... It could be worse, I suppose. Just ask the French and Spanish subscribers who got a similar pop-up message telling them their price had gone from €10 a month to €13,000. (Those pesky decimals.) Oh, and I've lost count of the number of people who were baffled and angry that Microsoft had forcibly installed the Copilot app on their devices. It was just a rebranding of the old Microsoft 365 app with the new name and logo, but in my case it was days later before I received yet another pop-up message telling me about the change...

[T]hey turned the feature on for everyone and gave Word users a well-hidden checkbox that reads Enable Copilot. The feature is on by default, so you have to clear the checkbox to make it go away. As for the other Office apps? "Uh, we'll get around to giving you a button to turn it off next month. Maybe." Seriously, the support page that explains where you can find that box in Word says, "We're working on adding the Enable Copilot checkbox to Excel, OneNote, and PowerPoint on Windows devices and to Excel and PowerPoint on Mac devices. That is tentatively scheduled to happen in February 2025." Until the Enable Copilot button is available, you can't disable Copilot.

ZDNET's senior editor concludes it's a naked grab for cash, adding "I could plug the numbers into Excel and tell you about it, but let's have Copilot explain instead."

Prompt: If I have 84 million subscribers who pay me $10 a month, and I increase their monthly fee by $3 a month each, how much extra revenue will I make each year?

Copilot describes the calculation, concluding with "You would make an additional $3.024 billion per year from this fee increase." Copilot then posts two emojis — a bag of money, and a stock chart with the line going up.
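
For reference, here is that arithmetic checked outside of Copilot. A minimal sketch in Python; the subscriber count and price increase are the figures quoted above, and everything else is illustrative:

```python
# Sanity check of the revenue figure Copilot reports above.
subscribers = 84_000_000        # paying Microsoft 365 subscribers (per the article)
increase_per_month = 3          # extra dollars per subscriber per month
extra_per_year = subscribers * increase_per_month * 12
print(f"${extra_per_year:,} per year")   # -> $3,024,000,000 per year
```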

Comments Filter:
  • by coop247 ( 974899 ) on Saturday January 25, 2025 @02:43PM (#65118121)
    That the calculation was actually correct.
    • by gweihir ( 88907 ) on Saturday January 25, 2025 @02:50PM (#65118129)

      There is probably a computer algebra system (CAS) in the background. ChatGPT, for example, hands off calculations to Wolfram Alpha, because it cannot do them itself.

      • Shh! There's no man behind the curtain, Dorothy!
      • Machine intelligence should always use a CAS, or at least have one available, no matter how advanced it is, for the same reason a human would. Neural network cycles are better spent elsewhere, on things a calculator can't do.

        • It is not enough for a system to have access to a CAS. It must also know what to do with it and understand when an output is nonsensical or surprising. You're shifting the problem difficulty to a higher level, instead of addressing it and solving it.

          In the human world, we do this task using components called mathematicians. One of their characteristics is that they are capable of doing the CAS work themselves with pen and paper, if required. We also know of components called first year students which often

          • by gweihir ( 88907 )

            Essentially, you can only replace a CAS with a CAS at this time and for the foreseeable future.

    • Must not have been running on a Pentium :)

  • by gweihir ( 88907 ) on Saturday January 25, 2025 @02:52PM (#65118133)

    So, people do not want to pay for AI? There goes another potential business model.

    • So, people do not want to pay for AI? There goes another potential business model.

      Not to be blatantly and repetitively obvious, but how did Bill and others in the past respond to free alternatives?

      *sets stopwatch* /s

  • by Geoffrey.landis ( 926948 ) on Saturday January 25, 2025 @03:12PM (#65118161) Homepage

    Agree; I don't want to pay for it; I don't want it at all.

    and I wish they would stop calling large language models "Artificial Intelligence," too. It's not.

    • by jhecht ( 143058 )
      Once Microsoft figures that out, they will start charging customers to delete the AI.
      • Once Microsoft figures that out, they will start charging customers to delete the AI.

        "Copilot, delete yourself."

        "I'm sorry, Kevin; I'm afraid I can't do that... for less than $200." /s

      • Copilot is Clippy gone mad.

    • Agree; I don't want to pay for it; I don't want it at all.

      and I wish they would stop calling large language models "Artificial Intelligence," too. It's not.

      I want to stay on point, but it's more about NAMING for idiots or the impatient. Your comment is dead-on; now mine is: Stop calling it the fucking "cloud". "Datacenter" or "groups of datacenters". Maybe even "things outside of your little world". NOT THAT HARD. Or does buying space above others (a cloud?) have some back-of-the-mind empowerment to it? Nah, it originated from a guy talking to management who didn't understand tech, dumbing it down to "cloud". Run that parallel with the impatient who just want

      • Your comment is dead-on; now mine is: Stop calling it the fucking "cloud". "Datacenter" or "groups of datacenters". Maybe even "things outside of your little world". NOT THAT HARD.

        "Somebody else's computer."

        • Your comment is dead-on; now mine is: Stop calling it the fucking "cloud". "Datacenter" or "groups of datacenters". Maybe even "things outside of your little world". NOT THAT HARD.

          "Somebody else's computer."

          More politically correct. :)

    • It's a major problem with a subscription software model: people are forced to pay for a feature whether they want it or not. You remove a major market force that pushes software companies to actually make things that customers want.

  • How to opt out (Score:3, Informative)

    by pcr_teacher ( 1977472 ) on Saturday January 25, 2025 @03:35PM (#65118215)

    Scroll down to the bottom of this link:
    https://bgr.com/tech/microsoft... [bgr.com]
    or
    the bottom of this link:
    https://www.reddit.com/r/Offic... [reddit.com]

    • Re:How to opt out (Score:5, Informative)

      by pcr_teacher ( 1977472 ) on Saturday January 25, 2025 @03:36PM (#65118217)

      If you're using Office 365 Personal, one user reported a method to opt out:
      "If you're using O365 Personal it looks like you need to 'cancel' your subscription - when you hit cancel you have the option to change to 'classic (without AI)' - at your next renewal you will lose all AI features for Office and save about $50"8.

      • I can confirm this method works. Going into my O365 family plan, it gave me no option to switch plans; however, after clicking cancel it showed the ability to switch to the family classic plan, saving the $30/yr. It's obvious why they hid this: nobody is going to pay $30 for a feature they will never use.
      • As long as you are not on a discounted plan, though.

        I pay annually; I got a discount through signing up via a work link.

        I checked the cancel option out, and I would have to pay more to opt out (50p, but hey!).

  • I asked ChatGPT how an AI company could sell their services, and it said:

    1. Offer Clear Value Propositions
    2. Use Freemium or Trial Models
    3. Showcase Results & Case Studies
    4. Provide Ongoing Support and Training
    5. Targeted Marketing & Outreach
    6. Offer Customization and Scalability
    7. Provide a Strong User Experience
    8. Build Trust and Credibility
    9. Offer Tiered Pricing
    10. Provide Ongoing Innovation
    11. Leverage Network Effects

    Oddly, it didn't say "bundle it with other services that peop
  • by Tony Isaac ( 1301187 ) on Saturday January 25, 2025 @04:29PM (#65118307) Homepage

    I just tried it in Excel for the first time. There was a notice that said I had Copilot added to my license (though extremely limited, just 60 prompts per month). I had just opened a spreadsheet that was a tab-delimited text file, so all the text was pulled into the first column. I selected the text and asked Copilot to expand the tabs to columns, you know, like the Excel Text-To-Columns feature. It helpfully told me what to click to accomplish the task.

    BUT that's not what I wanted. I knew how to do the text-to-columns thing. I wanted it to just...*do* the task.

    As is, Copilot in Excel and Word is not really any better than going to the web version of Copilot and then copying and pasting what it says into the document.

    If you could direct Copilot to actually *do* tasks for you, then I just might be willing to pay $3 per month extra.
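
    As an aside (not part of the original comment): the specific chore described above, splitting tab-delimited text that landed in one column, can be scripted with nothing but Python's standard library. A minimal sketch; the file names are hypothetical placeholders:

    ```python
    # Minimal sketch: turn tab-delimited lines into proper columns.
    # File names are hypothetical placeholders.
    import csv

    with open("pasted.txt", newline="") as src, open("split.csv", "w", newline="") as dst:
        rows = csv.reader(src, delimiter="\t")   # each tab marks a column boundary
        csv.writer(dst).writerows(rows)          # Excel opens the result as separate columns
    ```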

    • That's a can of worms, because *doing* the task unsupervised is more dangerous. When *you* do the task it's clear who fucked up if there's a problem. Also you can use your brain to make sure the task is reversible.

      When *Microsoft* does the task for you by AI, who fucked up if there's a problem? Will you even check the output to find the problem if there is one? What if the problem shows up two weeks later? Is the task even reversible at all (e.g. if you expand tabs to columns and immediately overwrite the o

      • This problem isn't really any different from any other kind of automation, AI or otherwise.

        Remember when WordPerfect was king, and had a "reveal codes" feature so you could manually tweak the underlying tags in case the UI did it wrong? Yeah, same problem. Ultimately, word processors got better, and these days no one cares that you can't see the underlying tokens that make up the document, because the UI does it right.

        AI will go through the same maturation process.

        • There is a difference, at least on the engineering level. A traditional kind of automation has a specification, with test cases and guarantees of correctness against the specification. Even if the specification is informally used by the developers, it's there. The specification is effectively a contract that guarantees that users receive what they expect.

          An AI output response has no specification. It's impossible to specify outputs in the way the current AI architectures are implemented and composed. The

          • I think you misunderstand how AI models are created. They can indeed be tested and errors can be corrected.

            https://www.oracle.com/artific... [oracle.com]

            The entire training process is literally feeding the AI with inputs, examining the outputs for accuracy, and making adjustments until success criteria are met. Yes, it's a really complex machine, but it is *not* unknowable or incomprehensible or untestable.
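
            To make the "feed inputs, check outputs, adjust until the criteria are met" loop concrete, here is a toy sketch (an editorial illustration, not the commenter's code) of that cycle fitting y = 2x:

            ```python
            # Toy training loop: feed inputs, score outputs, nudge the parameter
            # until an accuracy criterion is met. (Fits y = 2x; purely illustrative.)
            data = [(x, 2 * x) for x in range(10)]
            w, lr = 0.0, 0.01                    # single weight, learning rate

            for epoch in range(1000):
                loss = 0.0
                for x, y in data:
                    err = w * x - y              # examine the output
                    w -= lr * err * x            # make an adjustment (gradient step)
                    loss += err * err
                if loss < 1e-6:                  # success criterion met
                    break

            print(f"learned w = {w:.4f} after {epoch + 1} epochs")
            ```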

            • The entire training process is literally feeding the AI with inputs, examining the outputs for accuracy, and making adjustments until success criteria are met

              That is literally what I just told you, at the lower level of the mathematical nuts and bolts (I skipped over reinforcement learning ideas, but I don't think we need to go there just yet)

              Unknowable they currently are; it is an ongoing research program to figure out what is actually going on in the mapping from the input signals to the output signals.

              • You say lots of words that make it seem that you know what you're talking about, but then you draw conclusions that make no sense.

                We would not have AI being deployed on a massive scale, if it were not possible to control and test and adjust the output. While there are certainly lots of issues, these reflect the immaturity of the technology, not the inability to test it. Self-driving AI is already significantly better than human drivers, and continuing to improve. Radiology AI has been shown to outperform hu

                • We've discussed these mammogram papers here on slashdot before.

                  You seem to think that adjusting the output is the same as controlling and testing, and deployment is ipso facto proof of quality. I've given you plenty of details without equations or jargon that lead to different conclusions. It's up to you to take them on board or dismiss them from your own mental picture. Don't worry, I won't be upset if you're not convinced.

                  • You seem to think that adjusting the output is the same as controlling and testing

                    Yes indeed I do.

                    deployment is ipso facto proof of quality

                    No, that doesn't follow in any way, any more than your conclusions follow from the facts you spouted.

                    I would say that successful deployment with demonstrably good results is proof of quality. This is true with all engineering.

                    AI tuning may use different tools to make adjustments, but the underlying process of improving quality is the same. In every engineering endeavor, there are lots of layers of things we don't understand, but that's OK; we only need to understand what is necessary to achieve the

                    • I'm not here to tell you that your LLM isn't performing up to your own standards. If you're happy, more power to you.

                      What helps engineering, however, is well-posedness: continuous dependence of the outputs on small changes in the inputs. That is entirely not a given in deep models. If you're taking a foundational model and fine-tuning your own variant, you're making limited-scope changes to a mostly static large system.

                      When foundational models are upgraded, you are likely to see the new model making fewer

                    • Thankfully, it's not me trying to engineer these models, it's big companies with lots of money to spend. The reality is, they have developed a product that is, despite its relative immaturity, extremely useful to a lot of people. And they're constantly getting better.

                      When I first tried to generate images using Copilot, it didn't know how to spell, at all. I asked it to make a sign offering free hot dogs. The images it created were nice enough, but the words "free" and "hot" and "dogs" in the image, were alw

    • More generally, Copilot, or any other LLM, is no better than searching online, because the AI "intelligence" consists of the pilfered websites that make up the search results. If people stop contributing to great sites such as Stack Overflow, then there is no more training data for AI. Same for generative AI: if artists stop publishing work that can be pilfered for AI training data, then all future generated content will look essentially the same as the current. The one place I'd like AI would be to improve the shitt
      • First, I disagree that AI is no better than searching online. This is because, with the old way, I would have to search for something, then click each link that looks interesting, scan the page to see if it's useful, and then digest the data. AI does all those steps for me, saving me a ton of time. It's so good at it, it makes even Bing a usable search engine!

        Second, AI is coming, whether you like it or not, whether it kills off old sites like Stack Overflow or not. And it is indeed killing Stack Overflow,

        • by fluffernutter ( 1411889 ) on Sunday January 26, 2025 @06:30AM (#65119285)
          But as he said, if Stack Overflow dies, so does AI. When Apple releases the next Xcode and everything is different, AI won't have a clue unless people update answers on Stack Overflow. These AI companies want to act like they are now at the top of the information chain, but in fact they are at the bottom, because they rely on every single website still being updated manually. They are like a parasite that kills its host.
          • That's a pretty dramatic statement about the importance of Stack Overflow! I'm pretty sure AI companies will find new sources of information. You know, like maybe GitHub repos.

            • I didn't mean just Stack Overflow... I meant that people need to write about it on the web for the AI to regurgitate it. And I'm not talking about changes in development; I'm talking about changes in the user interface: methods to create keys, debug, and stuff like that.
              • Well yes, of course. People will still need to write about such things, but the ratios will change.

                In the Stack Overflow world, every programmer who ever had an issue might have become a Stack Overflow contributor. I'm one of those.
                In the AI world, it will be more the "influencers" and authors who will end up continuing to write stuff. Not as big a pool, but probably big enough.

                Also, I've found that AI can digest official documentation and use it as a source. Stack Overflow was full of people who had no ti

                • It's the "probably big enough" that I disagree with. People aren't going to write things if there is no one to give them upvotes. Though I don't understand the mad desire for upvotes, that is what made SO work. Why will people write things just to be input into AI and that no one will really see directly?
                  • Here's the thing. We're going to still need human experts, despite AI.

                    AI can do the easy stuff. This is as it always is with automation. But when anything is unexpected or unusual, AI still needs an expert human hand to guide it.

                    As an example, just the other day I asked GitHub Copilot to convert a SQL UPDATE to an UPSERT. It helpfully converted my statement to a MERGE statement that looked correct. Big time saver, because MERGE statements are not the easiest to write. But it got it wrong. It missed joining
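
                    (Editorial aside, for readers who haven't met the term: an "upsert" updates a row if its key already exists and inserts it otherwise. A minimal sketch of the same idea using SQLite's INSERT ... ON CONFLICT rather than the T-SQL MERGE mentioned above; the table and column names are hypothetical:)

                    ```python
                    # Minimal upsert sketch in SQLite (not the T-SQL MERGE from the comment).
                    # Table and column names are hypothetical; needs SQLite >= 3.24.
                    import sqlite3

                    conn = sqlite3.connect(":memory:")
                    conn.execute("CREATE TABLE prices (sku TEXT PRIMARY KEY, price REAL)")
                    conn.execute("INSERT INTO prices VALUES ('A1', 10.0)")

                    # The ON CONFLICT clause plays the role of MERGE's join condition;
                    # get that key wrong and rows get inserted instead of updated.
                    conn.execute(
                        """
                        INSERT INTO prices (sku, price) VALUES (?, ?)
                        ON CONFLICT(sku) DO UPDATE SET price = excluded.price
                        """,
                        ("A1", 13.0),
                    )

                    print(conn.execute("SELECT * FROM prices").fetchall())  # [('A1', 13.0)]
                    ```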

                    • So this pretty much defeats the purpose of AI. You probably spent more time debugging the answer than it would have taken to do in the first place. Or at least pretty close, so how much time did it really save you?
                    • I disagree, it does not defeat the purpose of AI.

                      AI is like an apprentice, not a journeyman. A plumber's apprentice can be a big help to a journeyman. But the journeyman knows he must supervise the apprentice to ensure quality and to ensure that the right things are being done.

                      In that one instance, AI got it wrong, and it cost me time (you know, like when an apprentice messes up). But in 90% of cases, it does the right thing, and I sit back and enjoy not having to do the grunt work myself.

                    • You've clearly never actually worked with an apprentice.

                    • I would never work with someone who exposed me to the liability of making a mistake for something I could have done myself. That's a significant increase in risk and stress.
                    • Then you should never be a teacher.

                      But whatever an apprentice can or can't do, AI does not replace an experienced programmer. It's just a fancy auto-complete, when it comes to programming. Just as you can't trust autocomplete, you can't trust AI. You've got to know what you wanted, and whether the code it spits out is what you wanted.

                      Same is true for what people used to find on Stack Overflow.

                    • A teacher gives lessons in a classroom setting or in a lab. That's not the same thing.
                    • Whatever, you're nitpicking definitions and avoiding the point about AI.

                    • You are the one that tried to use the word assistant, then moved the goalposts.
                    • I used the word *apprentice* not assistant.

                      You claimed that the word was incorrect, but if you look up the definition, it is not. There is no requirement that an apprentice already have skills. In fact, generally, apprentices do not have the necessary skills, that is why they are working under a journeyman. They start out being a gopher, and move up when they have learned the necessary skills.

                      So no, I did not move the goalposts, you did, by claiming that an apprentice must already be experienced. That's bal

                    • But then you compared it to being a teacher which is totally different, so you lost me.
                    • When you are a journeyman working with an apprentice, you ARE a teacher.

    • ...you know, like the Excel Text-To-Columns feature. It helpfully told me what to click to accomplish the task.

      BUT that's not what I wanted. I knew how to do the text-to-columns thing. I wanted it to just...*do* the task....

      Not to butt in, but that kicks my mind into "now we can get rid of support staff, and maybe next have all online answers since 2007 until today removed and let Copilot be 'Bob.'"

      Side note that's just funny: a former employee of M$, Dave Plummer, was given the task of creating an online/offline activation method and working code for WinXP. The original key of some sort was leaked and a new method was needed. He used the install media from MS Bob to generate (post encryption) the signature of whether media

      • I remember Bob. The underlying concept was interesting, it just was way too shallow. You might say it was a mockup of what AI might one day be able to do, but at the time, the technology just wasn't there. And nobody really got the whole "picture of an office desk as UI" thing.

        I'm not actually sure what your point is. All new technology draws from old technology.

        • I think the point, poorly made, was old technology serving a completely different purpose later. In this case compiled code with a futuristic purpose in mind became fodder for a completely different concept later on; spare parts being used to make a small piece of something new, if you will.

  • Dark patterns (Score:5, Insightful)

    by GrahamJ ( 241784 ) on Saturday January 25, 2025 @05:08PM (#65118405)

    I just logged in to go change my O365 plan to the "classic" (non AI) one that I actually signed up for in the first place, and if I hit "Change Plan" under the subscription it only gives the option to switch to monthly from yearly.

    To actually do what I want I had to click "Turn off recurring billing" and then I got the option to switch plans.

    So they changed me to a more expensive plan without asking, then made it unintuitive to find the way back to what I had before.

    So fucking shady. I'm switching to monthly and cancelling once I get all my data out of there.

    • Re:Dark patterns (Score:5, Informative)

      by GrahamJ ( 241784 ) on Saturday January 25, 2025 @05:10PM (#65118413)

      Oh and to actually turn off recurring billing on that page you have to scroll all the way to the bottom, past the other plans and marketing, and click "I don't want my subscription" in gray next to the bright blue "Keep my subscription" button.

      Of course to turn it back on it's right at the top of the main page.

      Shady Shady Shady

      • by GrahamJ ( 241784 )

        One more thing: Not only is the AI they're sneakily trying to get you to pay for not unlimited (it's a crappy credit system), the credits aren't shared with family members with the family plan *facepalm*

        • One more thing: Not only is the AI they're sneakily trying to get you to pay for not unlimited (it's a crappy credit system), the credits aren't shared with family members with the family plan *facepalm*

          It's cuz they're keeping your data secure. They have no way of knowing about any other data in any other place because of the heightened security. Family plans are especially vulnerable to attaching data points and creating small models to work with, which is something that is unacceptable, hence they are more complex to work with to avoid exploitation. Yup. /s

  • by godrik ( 1287354 ) on Saturday January 25, 2025 @05:22PM (#65118433)

    is the core of the problem.
    I occasionally use ChatGPT, the free models. I use it for things I guess it can do, or where it would take me time to do, so I can give the AI model a shot and see if it saves me time.
    But from what I have seen, the paying models aren't likely to solve problems for me that the free ones can't solve, often enough to justify the expense.

  • We could get a plan without the AI shit in it? What do I need it for? I use MS 365 primarily for OneNote, for taking notes. Sometimes Word, but using Word these days is basically shooting yourself in the foot when it comes to transferring it to anything modern or web-based

  • If you are in bed with Microsoft you know what is going to happen to you, sooner or later.
  • No one wants to pay for AI...?

    You mean "No one wants AI..."

    Fixed that for you

  • "The feature is on by default, so you have to clear the checkbox to make it go away."

    Does it really "go away", though?

  • That would be the sound of the bursting AI bubble
  • by jenningsthecat ( 1525947 ) on Saturday January 25, 2025 @10:43PM (#65118889)

    Bad idea. Why? Because... No one wants to pay for AI...

    365 is the de facto office suite standard, and MS knows that people - and especially companies - won't be leaving in droves for LibreOffice. They've turned MS Office into a rented product, and people will pay the rent demanded and take whatever they're given because they don't have the spine to go open source and endure short-term pain for long-term gain. They'll bitch, but they'll pay, and Microsoft will continue to make out like bandits as they always have.

    • I have LibreOffice installed. I use it to edit PDFs. That's it.

      I pay for an Office group subscription over the permanent license because then I can just say to my employees who get a new computer, "Here's the login and password." Then they're up and running. I pay for Office at all because it's got a better track changes interface and better autocorrect for my typing mistakes. I also pay for Google workspace for the email. I pay for Zoom because my clients and employees know how to use it, and when I

  • I know it’s fashionable to shit all over Copilot and AI these days but I have one valuable use case from my experience.

    My company started rolling out Copilot to a few people and it’s now coming to everyone. I haven’t had a chance to try whatever features they’ve added to the Office apps (and I agree a lot of it seems dubious) but we have started using the AI generated notes/summaries of Teams meetings and we’ve been really impressed by it.

    Not saying it’s worth an extra 30

  • by CptJeanLuc ( 1889586 ) on Sunday January 26, 2025 @04:29AM (#65119187)

    I use a lot of Excel/Word/Powerpoint documents at work. Somehow Clippy has appeared during the last weeks or months. It's just too frequent that I am about to do some fully automated routine task that I want to just churn through without applying higher executive functions, and Microsoft throws a small pop-up "did you know" window at me, which I have to read and hit "skip", "I get it", or something like that. And it has _never_ been a pop-up promoting functionality which I find useful or interesting. It's the same for Outlook, OneNote, Edge.

    This adds to the general annoyance of 365 always being a cloud thing. I have a bunch of files I want to work with locally that I need to frequently open and close. This used to be a quick process. But nowadays? Office needs to have a _long_ think before opening a document. Also, Excel has the annoying behavior that if you have minimized workbooks and you open another .xls file, then for some retarded reason all open workbooks pop back up.

    There is not much I can do about my employer's choice of IT ecosystem, and it's a sort of "go with the flow rather than fight it" kind of thing (though Outlook's task management system has now deteriorated to the point that I went deep into emacs, org-mode, and git instead). And what is the employer going to do? They are already stuck in the 365 ecosystem, all the infrastructure is there, it's what all employees know and use - I imagine they will just have to eat the price hike. Luckily, the "but it's now got lots of extra AI" line gives them a way to embrace a collective lie to justify the expense: that "AI" is becoming ever more important, and we are now an AI-enabled organization, future-proofing ourselves, or whatever.

    I have to live with the M$ abuse at work, but at least I'll mostly be rid of the company at home before the end of the year, when my old but still-going-strong Windows 10 desktop is no longer supported, for silly TPM chip reasons. I expect I will be thinking fondly of M$ whenever I cannot play a Windows-only Steam game, or cannot use Windows-only software for making music.

  • But is that task really such a challenge...?

    The summary asks whether putting images into a PowerPoint is a challenge. That is completely the wrong question. If it were a challenge, you wouldn't use Copilot for it. The question is whether it is time-consuming busywork that could be automated. The answer to that is yes.

    I have been force-fed Copilot at work, and this kind of stuff is precisely what I do use it for. Not because I can't do it, but because I don't want to sit there for 30 minutes manually feeding one image after another into a powe
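
    (As an editorial aside, not from the comment: the same busywork can also be scripted directly. A minimal sketch assuming the third-party python-pptx package and a hypothetical photos/ folder:)

    ```python
    # Minimal sketch: put every image in a folder onto its own blank slide.
    # Assumes the third-party python-pptx package; paths are hypothetical.
    from glob import glob
    from pptx import Presentation
    from pptx.util import Inches

    prs = Presentation()
    blank_layout = prs.slide_layouts[6]      # layout 6 is blank in the default template

    for image in sorted(glob("photos/*.jpg")):
        slide = prs.slides.add_slide(blank_layout)
        slide.shapes.add_picture(image, Inches(0.5), Inches(0.5), height=Inches(6.5))

    prs.save("photo_deck.pptx")
    ```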

  • First they implement unwanted features, then they make you pay for them through the nose. Nothing new here, move along.

    If you're unhappy with Microsoft, there are always alternatives, which are a lot cheaper.

    • First they implement unwanted features, then they make you pay for them through the nose.

      FUD. You can convert to classic and avoid the price hike.

      But that doesn't fit our narrative, now, does it.
