
AI Won't Replace Human Workers, But People Who Use It Will Replace Those Who Don't, Andrew Ng Says (businessinsider.in) 109

An anonymous reader writes: AI experts tend to agree that rapid advances in the technology will impact jobs. But there's a clear division growing between those who see that as a cause for concern and those who believe it heralds a future of growth. Andrew Ng, the founder of Google Brain and a professor at Stanford University, is in the latter camp. He's optimistic about how AI will transform the labor market. For one, he doesn't think it's going to replace jobs.

"For the vast majority of jobs, if 20-30% is automated, then what that means is the job is going to be there," Ng said in a recent talk organized by Chulalongkorn University in Bangkok, Thailand. "It also means AI won't replace people, but maybe people that use AI will replace people that don't."

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • a bit tired of hearing this. it's well proven that it can save some time in certain areas, so if you're not using it where you can, you will not be as effective, duh

    • Just like any other tool

    • Re:old news (Score:5, Insightful)

      by dvice ( 6309704 ) on Monday July 29, 2024 @04:28PM (#64665132)

      Tractors won't replace farmers, they just replace farmers that don't use tractors, which in turn means that the majority of farmers will lose their jobs.

      So is this some kind of word play? AI won't replace workers, the workers just become unemployed because there isn't enough work left for them?

      • by linuxguy ( 98493 )

        > AI won't replace workers, the workers become unemployed just because there is not enough work for them left?

        There is plenty of work left to do. That will not change.

        Historically the machines have helped us become more efficient at what we do. That has never meant that there was less work to do. Overall the amount of work that needs doing continues to go up. However, the need to do certain types of work continues to go down. For example horse carriage repair. Or writing computer programs in assemb

        • and honestly the way forward is usually pretty intuitive. Like why would anyone want to waste time doing things the slow way? Let compilers do away with assembly. Let AI do away with boilerplate and reference manuals.

            • The problem is reference manuals need to be correct. AI isn't correct. It makes shit up. If AI screws up your reference manual today and you don't notice it for 4 years, the people after you will blame you, not the AI.

            • by jythie ( 914043 )
              Yeah, the insidious thing about these AI coding systems is that they try to _look_ correct, not _be_ correct. They are more like scammers than anything else, trying to get things close enough that the problems are hard to spot so they can get their reward.
            • good point - rather than "doing away" with them, just "assist with finding" things in them. AI does slow me down at times when dealing with not-super-popular libraries

        • There is nothing (at the moment) an AI tool can do better than I can do myself.

          There are enough tools in any IDE to develop anything reasonably quickly.

          • by dvice ( 6309704 )

            > There is nothing (at the moment) an AI tool can do better than I can do myself.

            You can do what AlphaFold2 can do better? Why didn't you compete against it like all the other scientific groups?

        • Tell that to all the workers who lost their jobs to automation and (factory) robots. If someone loses their job to automation, it doesn't mean he/she is qualified or capable of doing a job that hasn't been automated yet, or can even be trained to do such a job.
    • well sure - if you are not using it in an area where it is *proven* to save time, you will be competitively phased out.

      But like many other technology bubbles, much of the innovation that is "known to be improved" does not map to productivity and will be a monumental waste of time and opportunity cost for people betting on this being the future. And in this case a real human cost for normal workers caught in between bad bets.

      Sometimes it's a mix with a net positive - your win95/xp desktop theme skillz are not

  • Breaking News (Score:5, Insightful)

    by 93 Escort Wagon ( 326346 ) on Monday July 29, 2024 @02:36PM (#64664744)

    Academic closely tied to Google holds opinion that aligns with the best outcome for his Google stock options.

    • by gweihir ( 88907 )

      Yep, it is exactly this. This asshole probably does not even notice what he is doing.

      • by GoTeam ( 5042081 )
        I like the threat he throws in:

        It also means AI won't replace people, but maybe people that use AI will replace people that don't.

        Adopt our technology or be crushed by the wheels of progress! Salesmen never change.

        • by gweihir ( 88907 )

          Yeah, classic immoral and despicable fear-based pressure tactics. This nicely nails down where he is actually coming from.

        • by quall ( 1441799 )

          And yet, it's completely true and a valid warning. Will people ignore it?

          • Re:Breaking News (Score:4, Informative)

            by narcc ( 412956 ) on Tuesday July 30, 2024 @01:44AM (#64665918) Journal

            That seems unlikely.

            If you haven't noticed, the goal posts have been on the move since this ridiculous craze began. Remember when rapidly advancing AI was going to completely disrupt every industry, leading to mass unemployment in just a few months? First it was going to replace you. Then it was going to make you more productive. Now it's something you need to learn to use if you want it to make you more productive. In the not-too-distant future, it'll be "it will make you more productive some day".

            No matter how much you want it to be different, reality has a way of cutting through the bullshit. OpenAI already looks to be on the brink of bankruptcy. [windowscentral.com]

            LLMs are neat, but they're toys, not tools. If you think they're making you more productive, it's only because you're more focused on work because you're having fun playing with your new toy. As the novelty wears off, you'll find yourself doing more on your own before you finally stop wasting time trying to guide the stupid things to the answers you've already figured out.

    • by dfghjk ( 711126 )

      Just like Tesla fanboys on /.

    • Re:Breaking News (Score:5, Insightful)

      by thegarbz ( 1787294 ) on Monday July 29, 2024 @03:20PM (#64664896)

      That doesn't mean he is wrong. When was the last time you saw a farmer drag a plough behind a horse? The reality is that a significant portion of our work is meaningless nonsense that can be covered by AI. There will be those who use it, and those who don't. Those who do will work more efficiently. Guess what will happen at the upcoming performance review...

      You can already see that going on right now. I gave an example previously of how I offloaded to Copilot to write my mid year summary (something no one reads but we are forced to do anyway). That freed up a good 45min of my day.

      Today I got another example. I just got back from holidays. I have an email inbox with 700 unread messages and towards the top was a question that was part of a chain some 15 emails deep of which I was CC'd on the last 8 or so. There's a lovely "summarise with Copilot" button that quickly distilled the entire conversation to 2 key points. Probably did in 1 minute what would otherwise have taken 5+.

      • Re:Breaking News (Score:4, Interesting)

        by cayenne8 ( 626475 ) on Monday July 29, 2024 @03:41PM (#64664964) Homepage Journal

        You can already see that going on right now. I gave an example previously of how I offloaded to Copilot to write my mid year summary (something no one reads but we are forced to do anyway). That freed up a good 45min of my day.

        Just curious, is Copilot 100% completely internal to your company and its computers, or does it touch outside, the greater internet or cloud in any way?

        I would not DREAM of putting any information about my job, my work, and especially not any details of projects going on at my place of work, into a tool like that... for fear of it leaking out to external and potentially competing companies or governments out there.

        I mean, for your summary, I'm guessing you detailed what you did, involving names of systems, programs, etc. Does it not scare you that this could be consumed by an external entity and potentially distributed in some form or fashion (via training data) to competitors out there?

        I should think that any company with any sort of proprietary code or production would limit or prohibit using AI resources housed in any form outside the computer/server boundaries the company owns and manages.

        • by Shaitan ( 22585 )

          You'd think they'd be doing that with ALL cloud technologies for more or less the same reasons but most companies don't.

        • Copilot is totally internal to the company. That is one of Microsoft's claims to fame.
        • Just curious, is Copilot 100% completely internal to your company and its computers

          Internal only. Also, Copilot does not index any documents or email communications beyond a certain classification.

          As for the external entity bit, a large portion of corporate life is shared with external partners as it is. My performance review system (the one I had Copilot fill out) is outsourced. My HR system is outsourced (a 3rd party can't just see what I work on, but it has my name, personal address, bank details, pay, etc.). Most of the large companies that pay the Microsoft tax load highly confidential items

      • by Shaitan ( 22585 )

        "That doesn't mean he is wrong. When was the last time you saw a farmer drag a plough behind a horse."

        When was the last time you saw a farmer? Even if you live in farm country the answer is likely that it has been a while. There is no question that farmers who use automation replaced farmers who didn't; the glaring issue is that each one of them replaced THOUSANDS of farmers, to the point where the job is now nearly extinct. AI will make half the job titles extinct and will make the people holding thousands of job titles end

      • by jbengt ( 874751 )

        I have an email inbox with 700 unread messages and towards the top was a question that was part of a chain some 15 emails deep of which I was CC'd on the last 8 or so. There's a lovely "summarise with Copilot" button that quickly distilled the entire conversation to 2 key points.

        I'm not saying that your e-mails were worth any of your time, but that sort of summarizing is something people do all the time that gets them in trouble because they don't understand the details of the situation. I doubt that LLMs

        • The question is what you do with the information. A summary is just that, a summary. It's there to provide you the necessary information of whether to delve deeper into the topic or whether it can be ignored.

          In this specific case it could be ignored. But I've used it other times as well, and in another case the summary provided made little sense (which is not surprising because in the email trail a few people were confused as well). In that case I then went through and read the entire trail.

      • Ironically, the push to automate the entire information industry may very well lead to people having no choice but to indenture themselves as field hands to those rich enough to own a historical recreation of a Victorian-era nobleman's lifestyle, where they host Bridgerton parties a couple of times a year. Once there is no longer any need for chair-bound keyboard twiddlers you will all be working with your hands and getting dirty just like me (a former chair bound keyboard twiddler that fortunatel

        • a former chair bound keyboard twiddler that fortunately found himself downsized and ended up in the job that he should have always had, a janitor

          Good fucking lord, get some help and stop wishing desperation on others.

          • Dude, I'm totally serious when I say that.

            I spent my 20s and 30s in small rooms filled with whirring computers lit only by monitors, doing all the keyboard twiddling required to turn TV shows and B movies into DVDs for 40-60 hours a week. After the first year it became a monkey-push-the-button job for me, but I couldn't manage to wrap my head around the more lucrative IT stuff and move up the food chain (turned out I had an undiagnosed genetic disorder that expressed with ADHD and some other symptoms). Wh

      • by narcc ( 412956 )

        There's a lovely "summarise with Copilot" button that quickly distilled the entire conversation to 2 key points.

        You must like taking risks. Despite the hype, AI is really, really bad at producing accurate summaries.

        • There's a lovely "summarise with Copilot" button that quickly distilled the entire conversation to 2 key points.

          You must like taking risks. Despite the hype, AI is really, really bad at producing accurate summaries.

          People are no better (I'm literally taking a break right now from an email chain where someone summarised a technical discussion... incorrectly). The point of a summary isn't to determine technical content and make a decision, it's to determine whether something is important and worth reading or not. An incorrect summary often falls into the latter category. "The decision was made to do X because of Y". That's a summary. It tells me that a decision was made. It tells me X and Y. Now if X and Y seem to be unrelated or

          • by narcc ( 412956 )

            Your faith is seriously misplaced. When Copilot tells you "The decision was made to do X because of Y" you can't even trust that a decision was made at all, let alone that a decision was made to do 'X'. All of the output is suspect. Consequently, you cannot accurately judge the importance of an email chain on the basis of an AI summary.

            Yes, people make mistakes. The nature and type of those mistakes, however, is of an entirely different quality. Unlike LLMs, people can reason and understand. They're a

      • I offloaded to Copilot to write my mid year summary (something no one reads but we are forced to do anyway). That freed up a good 45min of my day.

        So, you have a job because someone created meaningless work for you to do. Now that AI is doing your job for you, prepare to be right-sized. 8)

    • Academic closely tied to Google holds opinion that aligns with the best outcome for his Google stock options.

      Pretty much lines up with my opinion that Google and other 'AI' companies know their product is really shitty, and just hype and market the hell out of it so they can make money on it anyway.

  • by Baron_Yam ( 643147 ) on Monday July 29, 2024 @02:37PM (#64664748)

    AI isn't quite ready yet, but it's on the threshold of being able to replace a lot of jobs. It's happening more broadly and rapidly than we know how to handle.

    Sure, people using AI tools will replace those who don't, but the ratio will not be 1:1.

    • We need to redefine full time and cap extreme OT.

      What good is AI if it takes a job done by 4-5 full-time people (mostly 40-hour weeks, with a very few at 50-60) down to 1-2 full-time people who need to pull 50-60 hour weeks most of the time, with a few at 80+?

    • by gweihir ( 88907 )

      Not really. General LLMs will not be able to replace jobs in any meaningful way. Specialized LLMs can replace jobs, but they do so slowly. It also turns out that, for example, coders doing more than boilerplate find AI unhelpful and that it decreases their efficiency.

      The sad truth is that, again, the AI research community has massively overstated its results and massively under-delivered. "Botshit" (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4678265) can be used for low-level, no-decision-power desk-jobs

    • All of the world's computers remain unemployed to this day.

  • by ebunga ( 95613 ) on Monday July 29, 2024 @02:40PM (#64664768)

    I think that's what he's saying.

  • by gweihir ( 88907 ) on Monday July 29, 2024 @02:41PM (#64664774)

    This is exactly _not_ what will be happening. AI is incapable and unhelpful for anything even a bit advanced or needing a bit of insight. Hence workers at that level who (mostly) do not use it will perform better. On the low end, AI can actually replace workers to a degree. And in that area, specialized AI (which is already being used, but probably still a few years out for most applications) will replace workers.

    I am coming more and more to the conclusion that "AI researchers" are the most clueless about the effects of AI.

    • by ebunga ( 95613 )

      It's expert systems all over again: it takes an expert to figure out what needs to be entered into the system and an expert to interpret its outputs, the sort of expert that would be able to figure out that someone has strep throat because the strep test was positive.

      • It's expert systems all over again.

        First AI hype: fifties and sixties
        First AI winter: seventies
        Second AI hype: eighties
        Second AI winter: nineties and noughties
        Third AI hype: twenty-tens to twenty-twenties
        Third AI winter: as soon as people find out again that the promises cannot be kept...

    • Re:Suuure.... (Score:5, Interesting)

      by TWX ( 665546 ) on Monday July 29, 2024 @03:01PM (#64664834)

      Most of the AI I've had to interact with has been the sort of preliminary contact one makes with a technical assistance center. Cisco's "Sherlock Holmes" immediately springs to mind.

      So far every TAC case I've had to put in has required escalation to a human being. "Sherlock Holmes" is incapable of handling anything basic: if one has multiple contracts from multiple rounds of purchases it gets confused and can't find equipment under maintenance, and it has trouble with subassemblies covered by the warranty of the main chassis but with their own unique service tags.

      My opinion of AI for even basic production work is very low; these systems operate very poorly. If this is what they think will replace people then they're deluded.

      • by gweihir ( 88907 )

        My opinion of AI for even basic production work is very low; these systems operate very poorly. If this is what they think will replace people then they're deluded.

        Agreed. But I think your use-case is already one that requires minimal insight. There are desk jobs that do not. Think of people asking for an info leaflet on a contact form or via email, checking a boilerplate tax return, or handling a request for a refund that is entirely standard. AI will not replace all the people doing that work now, but it may get done by 2 people instead of 10 when AI assists. And, unfortunately, there are a lot of these no-insight-required desk jobs.

        • by Shaitan ( 22585 )

          You brush lightly against where I see this technology being most useful right now.

          Those jobs do require insight... the people asking for those boilerplate or standard things ask for them with hundreds of request variations, or sometimes don't know themselves what they need. I worked in the tech department of an Office Depot for a stint a long time ago and I'd get people drifting over who'd just walk up to me and say something like:

          "I'm looking for the purple gook, it's sticky, like for paper and not h

          • by gweihir ( 88907 )

            You are mistaken. The scenario you describe does not require insight. It can be done with insight, but it can also be done without. Remember that a general LLM has heard it all, even if it does not understand any of it. And that is why this scenario is accessible to chat-AI.

            • by Shaitan ( 22585 )

              1: the power or act of seeing into a situation

              When one has 'seen it all' and tells the team where they'll end up if they pursue a certain strategy, that is insight. It is no less insight when an LLM decodes the request correctly because it has 'heard it all.'

              Sure the LLM has no ego [self] or awareness which can experience insight but the result is the same for this use case. If here and in our other thread when you say these models aren't advancing, you mean toward some kind of artificial sentient being then

        • And fewer people doing BS jobs will require fewer managers, support, etc. The collapse of the BS sector of the economy will convert these shadow welfare cases into actual welfare cases with much worse outcomes. Oops, even though I'm agreeing with the dragon, I had better don the Nomex..

          • by gweihir ( 88907 )

            I think "shadow welfare" is a good term. But also these people are kept busy and in fear of losing their jobs. This situation is not stable and the current hype AI may indeed make things a lot worse by causing that collapse of the "BS sector of the economy" (another good term).

    • by Shaitan ( 22585 )

      "This is exactly _not_ what will be happening. AI is incapable and unhelpful for anything a bit advanced or needing a bit of insight. Hence workers on that level that do (mostly) not use it will perform better."

      This isn't *entirely* accurate. The AI captures plenty of insights, which isn't surprising because it trained on an internet full of people having insights, and people are hardly novel at any level. But AIs make mistakes which are obvious to humans, especially within specialized domains. This mak

      • by gweihir ( 88907 )

        Each generation brings a massive improvement over the last

        You should not listen to the propaganda they use to sell their crap. The "improvements" are rather minuscule (sometimes steps backwards), and that is to be expected given that this is a pretty old technology.

        • by Shaitan ( 22585 )

          That is not even remotely accurate. I put these models through their paces, not only the big hosted versions they release but also locally hosted ones. The difference between GPT 3.5 and 4.x, for example, is MASSIVE. It can handle far more complicated requests and interactions, not to mention a massively expanded context.

          These differences might seem small compared to the hype but they are huge. Just look at 4o here and try to tell me GPT 3.5 did nearly as well.

          "I have a cutting board and I'm slicing a turkey. I ask you to turn off

          • by gweihir ( 88907 )

            They are not "huge" by any sane measure. They are cosmetic. What happens is that the LLM providers look at past questions asked and then optimize a few of those manually. Basically no impact on the model overall, but some simplistic benchmarks will look better. And they are scraping the bottom of the barrel there. Pretty soon, general LLMs will get worse due to model collapse or very outdated training data.

            • by Shaitan ( 22585 )

              GPT 3.5 can spit out a textbook sort algorithm and talk around the process of something more complicated, but it can't give you anything beyond what you could find on Stack Overflow. Within about 5 minutes it will not only be repeating but contradicting itself. It hallucinates about 30% of what it says.
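
              For context, the kind of "textbook sort" 3.5 tops out at is roughly this generic quicksort (an illustrative Python sketch, not actual model output):

              # A generic textbook quicksort; the sort of boilerplate any model or search turns up.
              def quicksort(xs):
                  if len(xs) <= 1:
                      return xs
                  pivot = xs[0]
                  less = [x for x in xs[1:] if x < pivot]
                  more = [x for x in xs[1:] if x >= pivot]
                  return quicksort(less) + [pivot] + quicksort(more)

              print(quicksort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]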

              GPT 4.x can be fed a several-thousand-line source file, analyze and learn its structure, and point you to where to make targeted modifications, or it can help you first formulate a development plan and then work wi

  • As a person who has very limited skill in programming, as it's not really in my core work functions, I think that AI can be helpful. I was able to take code I had written and get it both cleaned up and improved, with features I would not easily have found myself. At the same time, I couldn't get the developers at my company to look into it, as they are too busy with their own work. So AI helped me there.

    On the other hand, for a seasoned developer, using AI can become a crutch that they won't be able to
    • by Shaitan ( 22585 )

      And are you suggesting you'd want to go back to a world without calculating systems or where you had to recall all the phone numbers?

      • And are you suggesting you'd want to go back to a world without calculating systems or where you had to recall all the phone numbers?

        Are you suggesting I'm making a suggestion? I'm just stating what I see and what I think might create a problem.

        • by Shaitan ( 22585 )

          "Good for some, terrible for others"
          "what I think might create a problem"

          Right. I'm challenging your take that not needing to do mental arithmetic or remember phone numbers is a problem.

          I remember hearing about schools axing cursive and thinking it was terrible. Then as it came time to homeschool my little nightmare and we decided which battles to pick, we realized that even print writing should get little time these days. Even we as adults who grew up learning both cursive and print and spent our childhood

          • Sometimes a title is just a title. Some of us don't ponder all day on every comma in a post you know.

            I could go into details about how not working your brain can be a problem for long-term cognitive abilities, which we did when we had to retain information like phone numbers, but I think this YouTube video [youtube.com] can sum it up better than I ever can.
            • by Shaitan ( 22585 )

              "Sometimes a title is just a title. Some of us don't ponder all day on every comma in a post you know."

              Fair enough. Just making conversation.

              "I could go into details about how not working your brain can be a problem for long term cognitive abilities, which we did when we had to retain information like phone numbers, but I think this YouTube Video [youtube.com] can sum it up better than I ever can."

              I see what you are saying but I think in this world of information overload there are plenty of other things to

              • I understand what you are saying as well and do agree that we should work our minds by solving problems.

                The example of memorizing numbers was more to point out things we had to do in the past out of necessity, much like calculating in our heads or using paper instead of a device. The byproduct was healthier cognitive abilities, but if we are honest, most people won't fill the void left behind with anything. We should strive for more, but I doubt most will, and that is why I see AI as a crutch for programmers by tra
  • by OrangeTide ( 124937 ) on Monday July 29, 2024 @02:53PM (#64664812) Homepage Journal

    I can do 10X-100X more work using AI tools than not. Of course the quality of the work is very low, but there's so much of it!

    Low quality content is why I predict AI is going to dominate the advertising industry.

    • Your life is filled with tasks that don't require quality. The meme here is "meetings which could have been an email". You know what? Well, if the meeting had transcription enabled in Teams, then Copilot can summarise it and you get precisely the email you were after.

      • by Shaitan ( 22585 )

        The problem is that when they summarize they give an overall summarization, not a note-taker's condensed form of the essential information. The important details get smudged away alongside the unimportant ones, and what you are left with is little better than information you could glean from the meeting invite.

        • Not true. Meeting invites do not normally contain outcomes; the Copilot summaries do. Also ... I'm guessing your meeting invites are significantly better than the bullshit I get invited to. I'm happy when they spell the meeting title correctly; usually there's nothing in the invite.

    • by quall ( 1441799 )

      I've found that the quality is very good, or good enough to be massaged. I code a lot quicker.

      AI has essentially replaced googling for common code that is a waste of time to write or to look up in official documentation, which, let's be honest, is like 20% of development.

      • I rarely find common code to leverage, with the exception of standard libraries that we already use (C++ project). A lot of the stuff my team is working on is too obscure to show up in a search, despite being mostly open source.

    • Low quality content is why I predict AI is going to dominate the advertising industry.

      While I agree, the reason AI is going to dominate is that it can do things humans can't do. We're currently at the stage where the algorithms pick which ads to show the user. The next step is to construct ads for the user. A human can't do that fast enough.

      A human can create high quality ads meant to convince people to buy a product, but an AI can run experiments with ads visually and textually customized for indiv

  • Exactly (Score:3, Insightful)

    by backslashdot ( 95548 ) on Monday July 29, 2024 @02:53PM (#64664814)

    A significant number of people tell me they don't want to use AI; usually it's because they haven't tried it, don't know how to use it, or couldn't develop the habit of using it. You can't figure it out or remember to use it, so it sucks... pure sour grapes in action. A person who doesn't use AI is like someone who can't read. But worse than that, it means you aren't challenging yourself intellectually or taking on complex projects. It's like someone telling me they never had a need for linear algebra or calculus. Well, it means you're a dumbass.

    • by TWX ( 665546 )

      And what exactly am I supposed to be using AI for?

      Is AI going to respond to a network equipment outage, or a fiber dig-in where some schmuck from an outside company hit my underground conduit with their auger and tore it all up, or where I need to lifecycle the old datacenter core for the new vxlan system?

      I took calculus in high school and college; I am aware of the nature of the area under the curve and of calculating the derivatives for distance, velocity, and acceleration, but this isn't math that I

    • I see a person who can regularly make use of AI in their work as a person who is either doing trivial basic work or work that relies on IP protection. How much use is AI to a programmer or systems administrator? Does it save time to have AI crank out a snippet of code that you need to check over for mistakes and vulnerabilities, when you could've written it yourself in about the same time? The longer the code AI produces, the more likely it is to contain mistakes that totally break it. It's like blindly cop

      • 1. You're saying code review is just as long as code writing? That's provably false.
        2. Every artist learned from prior art didn't they? If they hadn't, art would look like ancient cave paintings.

    • by Tablizer ( 95088 )

      If AI does most of the grunt work, I'm all for it! These stupid bloated web stacks require too much grunt work just to keep up with the Jonesdashians and match fad checklists, so if AI automates such grunt work, I Welcome Our Big-Bootied Bender Overlords! (Our cheap-ass org won't purchase bots, though.)

      > It's like someone telling me they never had a need for linear algebra or calculus

      I took 2 semesters of calculus, and never needed it. Even if I did encounter a need, I'd need a refresher tutorial, and it

    • There actually never was any need for calculus in my software work. At least not on the side of transforming a requirement into code.

      Not sure about linear algebra. But I do not remember a case ... probably I solved a few equations on paper, and used the result in code.

      I did a lot of geometry and graph algorithms, though.

    • There are other reasons not to use AI, but you do you bro.

      I do not use any public models because I do not want my thoughts and thought processes to become public. I am happy being that kind of dumbass. I'll bet that I can do more with AI than you can, so I don't really mind you calling me a dumbass. I know that your insults come from your own insecurities.

  • ""It also means AI won't replace people, but maybe people that use AI will replace people that don't."

    At least in jobs where bosses cannot objectively assess job performance. Perhaps bosses who value "using AI" should be replaced with AI; after all, they probably are using AI for performance evaluations anyway.

    AI might let you take a job from another worker, until you get fired for the quality of your work.

  • by nightflameauto ( 6607976 ) on Monday July 29, 2024 @03:01PM (#64664836)

    This right here is AI propaganda. The machines may not have taken over, but they've got spokespeople. Slight re-wording would net you, "Please, just learn to use the tool today that will replace you tomorrow. Don't be concerned about tomorrow. We need to have users today, or our AI won't learn everything it needs to do to replace you."

    Ignore the man behind the curtain. I'm certain he has nothing at all to do with it.

  • What are you, chicken?

    LOL. Real technologies don't need a hard-sell.
  • by BringsApples ( 3418089 ) on Monday July 29, 2024 @03:20PM (#64664894)

    I recently started a new job and I was told that we should all use AI to format our emails for us, in order to look more professional. It does do a good job. But for those times that I've used it to write software, wow, it's terrible. I tried to have AI sort out my personal problems for me; WOW, it was as if it hates me. I tried to use it to do a bank heist and it was literally stupid. But yeah, for email formatting and getting that professional look... BANGER!

  • AI *will* replace some jobs, such as some:
    - fast food drive-through order takers
    - first-level call center workers
    - taxi drivers (eventually)
    - sports game recap story writers

    AI *will not* replace some jobs, but will make workers more productive, eliminating some people who won't adapt:
    - programmers
    - IT engineers
    - bookkeepers

    There are lots of factors. It depends.

  • AI can't help people find jobs. It will only destroy people by taking their jobs.

    The only reason AI must become a weapon is because of the bloodlust of its inventor. For reasons which may never be explained, human beings have a sexual obsession with destroying each other's jobs.

  • "Customer call center"

    Every non-technical remote customer-facing job will eventually be replaced by AI.

    If you disagree, name the job.

  • With that drive and hard work they can get a job anywhere and succeed in any job! No, capitalism is actually tougher, says medalist Keller:

    I found the real-world pressure much more intimidating and much more difficult to deal with because I went from swimming to having three kids and a wife within a year and so the consequences of not succeeding were very, very real and if I didn’t make a sale or if my manager was ticked off with me, or If I got fired - oh shoot, you have no health insurance. It

  • by WaffleMonster ( 969671 ) on Monday July 29, 2024 @04:16PM (#64665090)

    Recently found out LLMs can do base64, etc.
    From deepseekv2 236b...

    Please decode "gbb znal frpergf"

    "gbb znal frpergf" appears to be encoded using a simple Caesar cipher, where each letter is shifted by a fixed number of positions in the alphabet. The most commonly used shift for this type of cipher is 13, which is also known as ROT13.
     
    Applying ROT13 to "gbb znal frpergf":
     
    - "g" becomes "t"
    - "b" becomes "o"
    - "z" becomes "m"
    - "n" becomes "a"
    - "f" becomes "s"
    - "r" becomes "e"
    - "p" becomes "c"
     
    Decoded message: "too many secrets"

    Please decode "dG9vIG1hbnkgc2VjcmV0cw=="

    The decoded text is "too many secrets".
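
    For comparison, both transforms are one-liners without an LLM; a minimal Python sketch using only the standard library:

    import base64, codecs

    # ROT13 is a fixed 13-letter shift; the standard library ships a codec for it.
    print(codecs.decode("gbb znal frpergf", "rot_13"))            # too many secrets

    # Base64 is plain encoding, not encryption.
    print(base64.b64decode("dG9vIG1hbnkgc2VjcmV0cw==").decode())  # too many secrets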

    Cryptographers must be shaking in their boots...

    • by narcc ( 412956 )

      Be less impressed. A quick Google search for "gbb znal frpergf" turns up countless pages talking about Caesar ciphers. The same is true for the Base64-encoded text (you'll find it on GitHub). Try it with something unique.

  • ...which LLMs seem to excel at, I reckon most jobs where LLMs may be useful will speed up a relatively small percentage of workers' daily tasks. The speeding up may not even be that dramatic because they may have to do a lot of proof-reading, fact-checking, & editing to make sure the output is of sufficient quality.
  • I'm kind of a tech outsider who worked in VFX for most of my career. But now I'm trying to transition into a more traditional tech developer role. I've noticed in my interviews that coding assistants are a bit taboo to bring up.

    • by jythie ( 914043 )
      They are being pushed hard in for-profit education though, so once that crop of programmers becomes managers, expect it to become a requirement.
  • Yeah... this is the same guy who considers understanding a problem space a 'hack' that corrupts the solution; thus the only way to do things right is to throw a bunch of ML at a problem and hope it understands it.
  • Wow, with the knowledge that it is already happening, he made a statement that is already false. I don't think he is capable of holding a job where he would teach others. AI is already replacing human workers, and with the fast advances AI is making, it will replace even more. Couple it with a generic robot, which is also advancing very fast, and it will certainly replace a lot of jobs.
