Google Cuts Hundreds of Jobs in Engineering and Other Divisions (nytimes.com)
Google laid off hundreds of workers in several divisions Wednesday night, seeking to lower expenses as it focuses on artificial intelligence and joining a wave of other companies cutting tech jobs this year. From a report: The Silicon Valley company laid off employees in its core engineering division, as well as those working on the Google Assistant, a voice-operated virtual assistant, and in the hardware division that makes the Pixel phone, Fitbit watches and Nest thermostat, three people with knowledge of the cuts said. Several hundred employees from the company's core engineering organization lost corporate access and received notices that their roles were eliminated, two of the people said.
"We've had to make some difficult decisions about ongoing employment of some Google employees and we regret to inform you that your position is being eliminated," the company told some workers in the division, according to text reviewed by The New York Times. Google confirmed the Assistant cuts, earlier reported by Semafor, and the hardware layoffs. "We're responsibly investing in our company's biggest priorities and the significant opportunities ahead," a Google spokesman said in a statement. After cuts throughout the second half of 2023, "some teams are continuing to make these kinds of organizational changes, which include some role eliminations globally."
Sinking company throws sailors overboard (Score:2)
Re:Sinking company throws sailors overboard (Score:5, Insightful)
Like in the old joke where two companies have a rowing competition and one loses by miles. Consultants are called in, and after 3 months of filling whiteboards, they find the reason: the winning team had a cox and 8 rowers; in the losing boat there were 8 coxes and a rower.
The recommendation was to reprimand the rower and to demand higher effort.
In the next competition, the distance was even worse.
So the rower got fired.
stop wasting time (Score:5, Funny)
Does this mean they will stop rearranging the Android UI for no reason?
Re: (Score:3)
Absolutely.
The reason will now be, "This is what our generative AI suggested the UI should look like this week."
Re: (Score:2)
And maybe give me back the ability to set my own colors instead of using pastel shades based on my background picture. Why Google thinks my colors should be based on my background color I have no idea. It certainly doesn't work with my background image. I definitely don't want my whole UI pink like it is now. Sigh. Google.
Coders learning to mine coal (Score:5, Funny)
Re: (Score:2)
Lithium is the new coal!
Re: (Score:2)
Re: (Score:3)
I’m sure all the redundant coders will quickly learn how to shovel coal.
Obviously not coal, but yes - if your job is made redundant, consider doing something different. I've done that over my entire career, works pretty well.
Re: (Score:2)
Nothing to see here. Again. (Score:2)
"Google laid off hundreds of workers in several divisions Wednesday night, seeking to lower expenses as it focuses on artificial intelligence..."
Don't you just love it when we read all those stories selling the idea that AI isn't going to have a measurable impact on human employment...as companies are literally laying off humans in order to focus on the very thing that (allegedly) isn't going to impact human employment.
I'd probably be more irritated over the demise of the human race by thy own ignorant hand...if we weren't so damn deserving of it.
Fake AI claims beat admitting failure for CEO (Score:5, Interesting)
"Google laid off hundreds of workers in several divisions Wednesday night, seeking to lower expenses as it focuses on artificial intelligence..."
Don't you just love it when we read all those stories selling the idea that AI isn't going to have a measurable impact on human employment...as companies are literally laying off humans in order to focus on the very thing that (allegedly) isn't going to impact human employment.
I'd probably be more irritated over the demise of the human race by thy own ignorant hand...if we weren't so damn deserving of it.
I don't believe CEOs when they say AI is impacting their company. I think it's a convenient excuse to make corrections and say it's an industry-wide reorientation towards the future...rather than admit that there were strategic mistakes by leadership. Pichai is leading them to the next great thing!!!!...not correcting his many mistakes!!!...this is a march towards the future, not a reflection of his failing as an overpaid CEO.
Google used to be exciting...10 years ago. Now barely anyone cares. Everyone I knew on Android was excited about their offerings. All those people now have iPhones. We used to be excited about their tablets, and they let them die....then there's the worn-out complaint about them killing things at the drop of a hat. We can spin our wheels on the details, but the bottom line is nearly everyone thinks Google is in decline. Some of it is unavoidable...cellphone hardware and displays can only advance so far...most of it is strategic failure from their leadership, and it started happening once the new CEO came on board.
I predict many more AI-related job cuts...not because AI is so beneficial, but because it keeps investors happy. In my view, these people were going to be let go regardless, but by spinning AI narratives, they can say they're investing in the future instead of failing strategically in the past. People who actually know what they're doing haven't proclaimed staff cuts due to AI.
I predict AI is harmful and dangerous...not because it will take away real jobs, but because it will enable scams and fraud at an unprecedented scale....just wait till we have political campaigns where all the materials were created by generative AI!!...as well as elaborate phishing sites, crypto scams, etc.
Re: (Score:2)
Can't wait (Score:2)
It would be interesting to see how long it takes for these folks to find another job. After all, for decades all we've heard is that employers, particularly in IT, can't find people to fill positions. If folks with this pedigree (one assumes you have to have at least two brain cells to work at Google) aren't hired relatively quickly, it's not that companies can't find people to fill jobs, it's that they're lying.
Perhaps companies should stop using that expensive crappy software to filter applicants and they'd fi
Re: Can't wait (Score:3)
Remember this (Score:4, Insightful)
Re: (Score:2)
For the last three decades, I've lived and worked knowing that this principle is true. As a result, I never let them take my "total loyalty" or free time or health. I gave them the time they were paying me for, not my whole life. When I sign off at suppertime, I'm done; I don't look at my work computer again until the next day. And guess what, this practice hasn't hurt my career in the least.
What difficult decisions? (Score:3)
Just the annual reminder (Score:5, Insightful)
Layoffs for AI jobs === simply cleaning house (Score:5, Interesting)
Good IT employees rarely get laid off by healthy companies/orgs, and management is always looking to cut loose employees they want to lose. "AI hiring" is just the latest, more culturally palatable premise to do it under.
If you're laid off from an IT position in a large org (including Google), you've probably done at least four of the following five things:
1) -- The worst thing you can do -- You stayed too long in the same position.
2) You made too much money (probably due to small raises over time while doing said job).
3) You didn't adapt to changes happening in the IT world, both inside and outside your company/group.
4) You pissed off and/or embarrassed someone with actual power in the not-too-distant past.
5) You grew too old (unavoidable).
existential (Score:2)
LLMs are an existential threat to Google's core business of search. Any greater degree of understanding a question will lead to better search results. If Google can't keep up with MS, they are going down.
Re: (Score:2)
Re: Why surprised? (Score:3)
Re: Why surprised? (Score:5, Insightful)
Re: (Score:3)
Sometimes, moving fast and breaking things is not a great idea.
A thousand times this. Yet this is held almost as a religion by some people. Often "move fast and break things" ends up reinventing the wheel with a nowhere-near-as-good wheel.
Because it is all based on the idea that people who came before us are stupid and wrong. That idea itself is stupid and wrong.
One of the things I learned early on is that Overnight Sensations are usually decades in the making.
Re: Why surprised? (Score:2)
Since when did moving fast and breaking things describe Google?
Re: (Score:2)
But you do fix the bottom line, because nobody likes their 401K going down the drain, yet many times the same folks are against cuts. You even fix the mental health of the folks being cut, because if you are within one of those projects (and/or you have gotten less-than-stellar performance reviews), you do know that you are hanging by a thin thread.
Someone really needs to investigate the utter fragility of people today. Mental health? I was laid off several times when I first entered the workforce, and so were a lot of others. We weren't happy about it, but it didn't destroy us like adversity does to people today. What do you think? Is needing therapy for what was once just "life" some sort of improvement in humanity?
Folks with good performance reviews have many ways of relocating to other teams/projects within the company. The ones who, for one reason or another, happen not to fit in are let go.
This is correct. Having been at the same place for over 30 years, I've seen a lot of comings and goings. I survived a fair number of do
Re: (Score:2)
Well spoken, well said....I've been wondering the same thing.
I guess this is what you get with a generation raised with "safe spaces"
Re: (Score:2)
This is how you get employees to stop taking risks, which isn't a great long-term strategy.
Not really. I don't see a problem with a company that tries five new ideas, lets them run for two or three years, then shuts down those that don't make economic sense.
I can see shutting down the Fitbit and Nest divisions as there are other companies that are doing a better job and that market is stagnating and declining. I thought the Pixel phone division was doing OK but wasn't really tracking their market position. I can also somewhat see the logic of downsizing the Google Assistant if they are going t
Re: Why surprised? (Score:2)
Re: (Score:2)
I agree, keeping good employees is preferable to just laying off entire divisions en masse. Perhaps they cherry-picked the best to transfer to existing teams and let the rest go. Not a lot of supporting information since the NYTimes article is paywalled.
Re: (Score:2)
Re: (Score:2)
I agree, keeping good employees is preferable to just laying off entire divisions en masse. Perhaps they cherry-picked the best to transfer to existing teams and let the rest go. Not a lot of supporting information since the NYTimes article is paywalled.
Yes, not everyone is equal in the workforce. Good employees are kept. Occasionally, you let someone go who you don't want to lose, but any time we did that, we hired them back when and if we could.
I was around all types over the years. Attitude, adaptability, effort, and of course competence are big. I had several co-workers lacking, some in all four aspects. Guess who went away during a downturn?
Re: (Score:2)
We no longer need coal miners, because we moved to natgas. But it doesn't make sense to terminate the coal miners, because their skills are fungible, and there's always a team that needs more headcount.
Real life: no, there isn't a team that needs more coal miners, and their fungibility is irrelevant. IT used to be a unique field largely exempt from the "employees are an investment that need to bring a significant positive return", because IT had infinite money from high risk investors. This is over.
We're past
Re: Why surprised? (Score:2)
Re: (Score:2)
Unless of course we no longer need the coal being mined, so we terminate the miners.
Again, every industry is like this, except for IT in last couple of decades, because of investment capital. Investment capital is now gone. We're like everyone else now.
Re: (Score:2)
We no longer need coal miners, because we moved to natgas. But it doesn't make sense to terminate the coal miners, because their skills are fungible, and there's always a team that needs more headcount.
Real life: no, there isn't a team that needs more coal miners, and their fungibility is irrelevant. IT used to be a unique field largely exempt from the "employees are an investment that need to bring a significant positive return", because IT had infinite money from high risk investors. This is over.
True, dat. If I could give one bit of advice for not being terminated and being considered a keeper, it is "be adaptable." We see a lot of people here and elsewhere who seem to want to do one thing and one thing only during their entire careers.
I've been in the professional workforce since the mid-1970s, and from then to now, everything has had a lifetime. My tasks read as almost random:
I've been doing low level programming since punchcard days, I've been overseeing a group of tech construction workers,
Re: (Score:2)
It's going to be hilarious if Californian "learn to code" types will have to learn to mine.
Re: (Score:2)
It's going to be hilarious if Californian "learn to code" types will have to learn to mine.
Why? You have a problem with people who work the earth? Or you just have a rageboner about the land of fruit and nuts?
Re: (Score:2)
I can tell you're a hoity-toity US West Coast leftie because you don't even know what modern mining in developed countries looks like, and assume that it's some "noble savage earth workers" shit.
When in reality, it's an exceedingly difficult expert job that involves operating exceedingly complex machinery and understanding and planning complex chains of explosive clearing. A lot of modern miners don't even get to go into the mine as machinery they operate is deemed so exceedingly dangerous to human life
When you're the dominant player in a market (Score:2)
Re: (Score:2)
Google is just freeing these human resources so that they can be better utilized by the economy at large. Clearly it had some projects that were no longer economically feasible and had to stop burning money on them.
Now all these experienced workers are freed up to find better opportunities with companies that need them. With their education and experience, they should be more fit for purpose than a wet-behind-the-ears college student who just finished their degree.
I'm definitely missing how anyone
Re: (Score:2)
Re: (Score:2)
I'm actually more surprised Google still has so many employees, since there are many projects inside which make no sense and are created only to give jobs to a workforce that has been continuously growing w/out rest since 1998.
This is the normal cycle for a healthy company. You hire people, you retain the best, and you purge the bottom of the stack ranking.
Here "best" is not meant in an absolute sense. Some engineers just happen not to fit within the company, even though Google provides a very easy way to relocate among different teams and projects.
This is where many of the projects that make little sense are born, but in the end, a cut needs to be made.
The other part is that Google can't live off search forever. In fact, that time might already be past.
I'd estimate my googling has dropped by 50% since ChatGPT came out; a lot of the tricky technical answers I can usually wrench out of the LLM.
That's a scary thought for Google, they need some other projects to generate income to fill the gap.
Re: (Score:2)
I stopped using Google for search altogether years ago and I do my best to block pretty much every advertisement on the web. Google is probably going after the non-techies (the other 99%) for their money and most equate search with Google anyway.
I'm quite sure they'll be able to shift "how" searching is done and still be able to capture all this data in order to continue being the advertising company that they are.
Re: Why surprised? (Score:2)
Re:No surprise (Score:5, Insightful)
Bard LLM (or any LLM, really) is going to be 1000% better than whatever pre-LLM technology they were working on for their voice activated assistant technology. That went from "voice assistant is an impossible nut to crack" to "college students build/train better technology in a weekend as a homework assignment" in less than two years.
Except that most people want a voice assistant that doesn't hallucinate random garbage. LLMs are a neat toy, useful for maybe helping with creative writing or other creative processes, but correctness is not their strong point, and that's not really fixable. It's the nature of LLMs. They're a glorified autocomplete algorithm, like tapping the middle button above your iPhone keyboard over and over, only with a bigger training set and better word prediction.
Maybe they might be good for natural language input, figuring out what the person is really asking for, but it needs to go into something more predictable and hard-coded for generating outputs, or else you're going to get some really bizarre behavior, and nobody will understand why it does what it does.
For an example, if an assistant is to integrate with a light switch, you can't have it guessing what the next octet in the IP address should be. You can't have it figure out the network protocol by guessing what the next byte in an XML payload should be.
Could it replace some small parts of assistant functionality, like the input parsing? Probably, but only if it hallucinates a lot less than LLMs currently do. Will it replace the assistant in general? Almost certainly not, or else the assistant will get a lot less useful.
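The split the parent describes (LLM for input parsing, something deterministic for output) can be sketched in a few lines. This is a toy illustration, not any real assistant's architecture: all names are hypothetical, and the "LLM" is stubbed out as a keyword matcher so the example is self-contained. The point is the shape of the design: the model may only ever emit an intent from a fixed whitelist, and a hard-coded handler does the actual device control, so no generated bytes reach the hardware.

```python
# Toy sketch of "LLM parses, deterministic backend acts" (all names hypothetical).
# In a real system parse_utterance() would be an LLM constrained to this schema;
# here it is a trivial keyword matcher so the example runs on its own.

ALLOWED_INTENTS = {"light_on", "light_off", "unknown"}

def parse_utterance(text: str) -> str:
    """Stand-in for the LLM: map free text to one whitelisted intent."""
    t = text.lower()
    if "light" in t and "off" in t:
        return "light_off"
    if "light" in t and "on" in t:
        return "light_on"
    return "unknown"

def execute(intent: str) -> str:
    """Deterministic backend: no model output ever touches the device directly."""
    assert intent in ALLOWED_INTENTS
    handlers = {
        "light_on": lambda: "switch -> ON",
        "light_off": lambda: "switch -> OFF",
        "unknown": lambda: "Sorry, I didn't understand that.",
    }
    return handlers[intent]()

print(execute(parse_utterance("please turn the light on")))   # switch -> ON
print(execute(parse_utterance("what is the meaning of life")))
```

Even if the parser hallucinates, the worst it can do here is pick the wrong whitelisted intent; it cannot invent an IP address or a malformed payload, which is exactly the failure mode the parent is worried about.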
Re: (Score:3)
Except that most people want a voice assistant that doesn't hallucinate random garbage.
2nd-gen LLMs hallucinate much less than 1st-gen. 3rd-gen will be even better. Babies babble, then they grow up.
correctness is not their strong point, and that's not really fixable.
Not true. Progress is happening on many fronts.
LLM-RAGs (RAG=Retrieval Augmented Generation) connect an LLM to a database. Another approach is to combine an LLM with a search engine so it can verify output before sending it to a human.
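The RAG idea above can be reduced to a toy sketch: retrieve a relevant document first, then constrain the "generation" step to the retrieved evidence. This is purely illustrative (the documents and function names are made up, and the generation step just quotes the retrieved snippet rather than calling a real model); a production RAG system would pass the retrieved text to an LLM as grounding context.

```python
# Minimal RAG-shaped sketch (toy data, hypothetical names): answer only from
# retrieved text instead of letting the model free-associate.

DOCS = [
    "The Pixel 8 was announced in October 2023.",
    "Google Assistant supports voice commands on Nest devices.",
    "Fitbit was acquired by Google in 2021.",
]

def retrieve(query: str) -> str:
    """Crude retrieval: pick the document with the most word overlap."""
    qwords = set(query.lower().split())
    return max(DOCS, key=lambda d: len(qwords & set(d.lower().split())))

def answer(query: str) -> str:
    """'Generation' constrained to retrieved evidence, quoted verbatim."""
    return f"According to the index: {retrieve(query)}"

print(answer("when was fitbit acquired"))
```

A real system would use embedding similarity instead of word overlap, but the hallucination-reducing property is the same: the answer can only contain claims that exist in the retrieved store.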
Re: (Score:2)
What's the difference between creativity and hallucination?
Hallucination is unwanted creativity. Unfortunately LLMs are often desired for their creativity.
Re: (Score:2)
Creativity doesn't mean that all doors are unhinged and nothing matters. If I want a "creative" story about elves and dwarves, it doesn't mean that you can tell me a random string of nonsense without any rhyme or reason. That story still needs "rules", yes, even fantasy worlds need to have rules for the story to be coherent and sensible. A world without rules makes for a very boring story, because every character can do whatever they want, there are no limitations and nothing we can rely on. People don't li
Re: (Score:3)
Creativity doesn't mean that all doors are unhinged and nothing matters.
Post that as a sig, my friend, because it is an insight not understood by many. Creativity is all about working within restrictions. Not about no restrictions - although some would think otherwise. If we take for example art, creatives will work in a genre, be it nudes or hedges.
If I want a "creative" story about elves and dwarves, it doesn't mean that you can tell me a random string of nonsense without any rhyme or reason. That story still needs "rules", yes, even fantasy worlds need to have rules for the story to be coherent and sensible. A world without rules makes for a very boring story, because every character can do whatever they want, there are no limitations and nothing we can rely on.
And there you have one of the huge problems with a lot of modern entertainment. They don't know how to keep within the boundaries and rules. The best example of that is the carnage Amazon wreaked upon the Lord of the Rings "Rings o
Re: (Score:2)
"Subverting expectations" is the second worst narrative tool in JarJar's box right after his unfulfilled mystery boxes. Seriously, why doesn't anyone notice that this guy is a hack that couldn't write a compelling story if his life depended on it?
A surprising twist is not the same as a "subverted expectation". You can thwart expectations, as long as what you do is within the established rules of your narrative universe. If Jean-Luc Picard suddenly pulls out a light saber in the Klingon great hall and slaugh
Re: (Score:2)
Re:No surprise (Score:5, Insightful)
Later-gens LLMs are invariably worse than their predecessors. Simply by what they're being trained on.
First-gen LLMs are by definition learning from human input. Because there is no other. Human input, at least if you manage to somehow limit it to relevant sources and don't use bullshit peddlers as a source (I'll get to that), is mostly sensible and topical.
Now first-gen LLM starts generating content. Invariably, some of it will be simply and plainly bad. Really, really bad. Not just factually incorrect but simple garbage. "A hat being sold as a cake" level wrong. Problem is now that the second-gen LLM uses this as part of its input. Because as we've seen, it's incredibly hard even for people (and of course for what passes as AI these days) to differentiate between human and artificial content.
And the portion of AI garbage will increase with every generation, because LLMs are faster than humans at generating content. Much like bullshit peddlers are faster at generating bullshit than people who have knowledge can be at debunking it. The term we use for this is the Gish Gallop [wikipedia.org].
Maybe the term for overwhelming LLM's input with LLM garbage could be called the "LLM gallop".
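The feedback loop described above (models generating content faster than humans, so the synthetic share of the training pool keeps climbing) can be illustrated with a back-of-envelope simulation. The numbers here are invented purely for illustration: humans add text at a steady rate while model output compounds each generation.

```python
# Toy model of training-data pollution (illustrative numbers only):
# humans write at a constant rate, model output doubles each generation,
# so the fraction of synthetic text in the pool rises monotonically.

def synthetic_fraction(generations: int, human_per_gen: float = 1.0,
                       llm_start: float = 1.0, llm_growth: float = 2.0) -> list[float]:
    human, synthetic, rate = 0.0, 0.0, llm_start
    fractions = []
    for _ in range(generations):
        human += human_per_gen      # steady human contribution
        synthetic += rate           # model contribution this round
        rate *= llm_growth          # ...and it compounds
        fractions.append(synthetic / (human + synthetic))
    return fractions

print([round(f, 2) for f in synthetic_fraction(5)])
# → [0.5, 0.6, 0.7, 0.79, 0.86]
```

Under these assumed rates the synthetic share heads toward 100%, which is the "LLM gallop" scenario; the counterargument downthread is that labs filter and curate rather than ingest this pool unfiltered.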
Re: (Score:2)
Later-gens LLMs are invariably worse than their predecessors.
No, they aren't. The threat of "self-consuming training" is well-recognized and 2nd generation LLMs are given training sets designed to avoid the problem.
Re: (Score:3)
Weird that we cannot create such sets when it comes to scientific papers and doctorate theses...
Re: (Score:2)
The strong LLMs come from companies, and I'm sure they save their training sets and could retrain with data that existed before any poisoning of the well. (Assuming courts don't eventually force them to delete this data.) Presumably, they will consider the trade-offs of using newer data versus the older data along with architectural and training technique advancements to get better LLMs.
So no, the LLMs hav
Re: (Score:2)
Later gen LLMs are vastly better in terms of training materials, as problem you describe is well understood and actively mitigated against. And specialist LLMs are trained on specialist hand picked materials. I.e. the recent whining from union of pediatrics that "chatgpt has low accuracy in our field" is about first gen general purpose LLM. Specialist next gen ones in development for those sorts of things are trained on hand picked set of pediatric diagnoses that are correct and incorrect (with right and wr
Re: (Score:2)
The problem is when pressed for details that are not present in the "blurry jpeg of the web" -- or whatever dataset they have been trained on, but there are new papers showing that being trained on specialist data produces worse results than training on general data -- so, when pressed for details not in the training data, the token predictor slips into hallucinating. And since you don't know when that happens, ANY portion of the response could have been hallucinated.
Interestingly enough, research h
Re: (Score:2)
Me in second sentence:
>And specialist LLMs are trained on specialist hand picked materials.
You replying to it.
>The problem is when pressed for details that are not present in the "blurry jpeg of the web"
Attention span of one sentence or less. Ouch.
Re: (Score:2)
The sentence you quoted continues with, "or whatever dataset they have been trained on". Less than one sentence attention span indeed.
Re: (Score:2)
I could've quoted the entire statement, as it built on the same point of "but learning material is shit, so output will be shit". Which is the exact thing I attacked in my post.
I suppose it's mea culpa for not copy pasting your entire post, giving you this inane angle of attack. You'd think I'd learn that most people with emotional investment in not understanding the point will make great effort to miss the point at all costs.
But I'm too stupidly optimistic for my own good at times.
Re: (Score:2)
Later gen LLMs are vastly better in terms of training materials, as problem you describe is well understood and actively mitigated against.
This is great that AI can be trained to only present truth. And that all versions will present the truth, and nothing but the truth, without bias of any sort. This is the greatest leap for humanity since the last buzzword brainstorm.
And specialist LLMs are trained on specialist hand picked materials.
I see - who picks the hand picked materials? And how is AI prevented from being used by other AI to present a different view?
Good examples are things like Lysenkoism versus actual biology, Right wing trickle down theory versus human psychology, or left wing denial of biology - ie, human women are identical to human men. People still believe in wrong things, and will continue to believe in them, and AI can easily be utilized to swamp the Intertoobz.
And if we have humans hand pick what is allowed or not allowed, well, refer to my last paragraph.
Re: (Score:2)
Most of "knowledge work" is application of existing and present knowledge (which has nothing to do with "truth").
Knowledge is picked just as it has been for centuries. By experts in each field. Experts are often wrong, and we do stupid shit for centuries because of it. Leech treatments in medicine for example. This is why we learn from knowledge, rather than "truth". The only people who learn from "truth" are religious scholars.
Re: (Score:2)
Most of "knowledge work" is application of existing and present knowledge (which has nothing to do with "truth").
Knowledge is picked just as it has been for centuries. By experts in each field. Experts are often wrong, and we do stupid shit for centuries because of it. Leech treatments in medicine for example. This is why we learn from knowledge, rather than "truth". The only people who learn from "truth" are religious scholars.
Truth has nothing to do with religious scholars.
That's mistaking science for truth. What is "correct" today may or may not be correct tomorrow. It's just how science works. It's why regular people love them some headlines like "Scientists are stunned by..." and "This discovery changes the laws of physics forever!" That's why it is the exact same thing as "Pennsylvania housewife discovers a trick no one knows about."
Yes, we are often wrong. And apparently what the media and most others think is stunned -
Re: (Score:2)
Science does not seek truth. Religious scholars do. Scientists are utterly agnostic on truth.
Scientists seek more observably correct knowledge.
The funniest part is that the only religious tenets onto which you can even project your personal views on truth seeking are Abrahamic, because that is what "your truth" is on religion. It never even crossed your mind that Abrahamic religions are fundamentally quite poor at truth seeking, and are much better at motivating, which is why modern scientific process is born directly
Re: (Score:3)
>Unfounded belief systems that mostly require faith without proof. Even enlightenment.
By this logic, science is also a religion, as science requires unfounded belief that human as an observer exists and is reliable.
This is the problem with people who are raised on religious philosophy, but never actually get to think it through. So all they do is projection like this:
>Which facts does your religion have that others do not? Which deity is the real one?
This is a very common dodge of someone raised into
Re: (Score:2)
Not just factually incorrect but simple garbage. "A hat being sold as a cake" level wrong.
Join us next week to watch contestants win fabulous gifts and prizes on Is.. It.. LLMs!
Re: (Score:2)
And the portion of AI garbage will increase with every generation, because LLMs are faster than humans at generating content. Much like bullshit peddlers are faster at generating bullshit than people who have knowledge can be at debunking it.
A positive feedback loop, overwhelming actual input by humans. Truth and Lies molded into one, and yes, a Gish Gallop that in AI form is impossible to refute.
With enough AI generated crap, any refutation by truth will be well below the "noise floor".
Re: (Score:2)
Later-gens LLMs are invariably worse than their predecessors. Simply by what they're being trained on.
First-gen LLMs are by definition learning from human input. Because there is no other. Human input, at least if you manage to somehow limit it to relevant sources and don't use bullshit peddlers as a source (I'll get to that), is mostly sensible and topical.
Now first-gen LLM starts generating content. Invariably, some of it will be simply and plainly bad. Really, really bad. Not just factually incorrect but simple garbage. "A hat being sold as a cake" level wrong. Problem is now that the second-gen LLM uses this as part of its input. Because as we've seen, it's incredibly hard even for people (and of course for what passes as AI these days) to differentiate between human and artificial content.
And the portion of AI garbage will increase with every generation, because LLMs are faster than humans at generating content. Much like bullshit peddlers are faster at generating bullshit than people who have knowledge can be at debunking it. The term we use for this is the Gish Gallop [wikipedia.org].
Maybe the term for overwhelming LLM's input with LLM garbage could be called the "LLM gallop".
That's only if you feed 2nd gen LLMs unfiltered 1st gen LLM output.
I'd say the volume of good input is increasing since LLMs help people generate better content. Of course, the auto-generation means there's also a lot more garbage to weed through.
So you need to do a bit more curation of the training set, maybe you can even train models to help, but I don't see it as an unsolvable problem, especially since you have previous pre-LLM training data to augment.
In fact, the income stream might be high enough that
Re: (Score:2)
Except that most people want a voice assistant that doesn't hallucinate random garbage.
2nd-gen LLMs hallucinate much less than 1st-gen. 3rd-gen will be even better. Babies babble, then they grow up.
And before too long, AI will be in a positive feedback loop with AI, and positive feedback loops always work out great.
Re: (Score:2)
You are assuming that an LLM-RAG is an LLM with a DB bolted on as a post-processor.
That's not how it works. The LLM and DB are fully integrated.
Re: (Score:2)
I don't know, my Echo device has gotten consistently worse over the years. Alexa can barely understand most of my requests and I'd almost prefer it hallucinate something than say "Sorry, I'm just not sure" one more time.
Re: (Score:2)
Re: (Score:2)
I'm often amused when I hear the way my friends text using speech or commands to their personal assistants, "open paren-capital E-small-c-h-o-close paren-full stop-happy face"
you're in for a treat: https://youtu.be/8SkdfdXWYaI?t... [youtu.be]
Re: (Score:2)
Except that most people want a voice assistant that doesn't hallucinate random garbage. LLMs are a neat toy, useful for maybe helping with creative writing or other creative processes, but correctness is not their strong point, and that's not really fixable. It's the nature of LLMs.
It is true that LLMs currently have (and likely will continue to have) problems with hallucination, as well as subtler inaccuracies.
However, this does not render LLMs totally useless. For example, Google searches are often irrelevant, sometimes inaccurate, and once in a while totally wrong. However, lots of people still use Google searches and find them to be useful. The key is that people use the search results as an input into the human thought process that then arrives at the actionable result.
This t
Re: (Score:2)
A Bard LLM (or any LLM, really) is going to be 1000% better than whatever pre-LLM technology they were working on for their voice-activated assistant. That went from "a voice assistant is an impossible nut to crack" to "college students build/train better technology in a weekend as a homework assignment" in less than two years.
That's what happens when you make technical decisions based on Disney movies and pop culture. If they think those college kids can do so much better, have they tried employing pre-school kids? It'll be fine.
Re: (Score:2)
Having any positive effect on society by providing jobs
"Providing jobs" is not a positive effect. The purpose of employment is the production of goods and services, not "keeping people busy". Productivity and living standards rise when a company produces more with less labor, freeing up workers for other productive employment. "Make work" jobs are not good for the economy.
Re: (Score:2, Informative)
People with no jobs are worse for the economy.
Re: (Score:2)
People with no jobs are worse for the economy.
The unemployment rate in Silicon Valley is 3.7% and even lower for techs.
Economists consider 5% to be full employment since some people are always between jobs and in the process of interviewing.
Re: (Score:3)
That is not a surprise. Anybody without a job has to move out of the Silicon Valley fast. Duh.
Re: (Score:2, Informative)
Well, hold on there, let's put on our thinking caps and add some context to these spooky scary numbers we are throwing about.
Firstly, the LFPR today is 62.8% with a U3 of 3.9%. The US historical high was 67.2% in 2001, so a 4.4-point drop in about 20 years. Now, is this bad? Well, let's add some context to that period. What was happening in 2001 with a U3 rate of 5.5%?
Well, let's look at what "prime working age" means and how that lines up with 2001. It lines up with the Baby Boomers hitting their mid-50s, so a portio
Re: (Score:2)
Depends how you count someone as "unemployed"...
Are they unemployed because they don't have a job, or are they unemployed because they need a job but can't find one?
Re: (Score:2)
Then break a lot of windows. So much work for people who have to make new window glass and install it. Perfect economy hack.
Re: (Score:2)
Re:Pure greed at work (Score:5, Insightful)
Sorry to say it, but "creating jobs" is the necessary evil for companies. That's why I always laugh when someone claims that we need our economy to be unfettered because they are "creating jobs".
BULL
SHIT
I create jobs. When I go to a carpenter and order a table, and his order books are full to the brim and he can't make it, he has to hire someone to do the work he can't do alone. When I get a haircut and they are already booked solid, they need to hire someone to cut my hair if they want my money. Nobody, in the history of businesses, has ever hired someone out of the goodness of their heart or because they felt lonely in the empty store. They hired someone because they were forced to by the amount of business they would otherwise have lost.
The ones who create jobs are you, me and everyone else who go out and buy stuff. This of course is something we can only do if we have money. If you concentrate money in the top 1%, that system breaks down. Nobody buying stuff means nobody needs employees to do the work, which means nobody gets a wage, which means nobody is buying stuff.
In case you were wondering why the economy is in the dump that it is.
Re: (Score:2)
This of course is something we can only do if we have money.
so ... if we have a job, right?
Re: (Score:2)
Pretty much, yes. Unless we start to pay people for breathing and metabolizing, I guess that will be the only way they can do their job in our capitalist system: Be the demand side.
Re: (Score:2, Insightful)
Actually the way it works is if there is some aggregate demand for a carpenter or a cook or a hair dresser, someone may decide to put their money/effort into opening a business that you may then frequent. If the person is wrong about the aggregate demand (ability to sell, which means the need and ability of the population in the area to pay for the service) then they will lose their business.
I am not sure why people talk about 'goodness of their heart' when it comes to hiring people, is *anyone* actually e
Re: (Score:2)
Producing goods you cannot sell is, for lack of a better word, stupid. You don't get rich by producing, you get rich by selling. Producing only makes you poor; it ties up liquid assets in nonliquid goods. The housing bubble (and all the other "money parking" bubbles) is pretty much a logical consequence of the lack of a way to sell your goods.
People who hold capital would of course want to invest it. For good reason, I mean, what else would I do with that money? Stashing it under my pillow is not going to do me any
Re: (Score:2)
Having any positive effect on society by providing jobs is not even on the radar anymore for these companies. They need to die.
Companies don't exist to be make-work programs. They never were; there's nothing "anymore" about this. If you want a society-relevant program you need governments, like the Chinese, who employ people to plant and trim trees just to keep the unemployment numbers down.
No one is obligated to give you a job if they don't have anything for you to do or are not able to fund you. Making money isn't "greed"; it is literally the purpose of every business that isn't registered as a nonprofit.
Re: (Score:2)
Having any positive effect on society by providing jobs is not even on the radar anymore for these companies. They need to die.
Worked out pretty well in Soviet Russia. Everyone had a job provided to them.
Re: (Score:2)
The key difference between those systems is having the safety net, social services, and institutions in place so that if people lose or want to change jobs, their life isn't upended and they don't have to suddenly dig through savings just to pay bills and survive until the next job.
In Germany for example, as I understand it if you are unemployed then you are automatically enrolled into public healthcare and UE insurance is mandatory so you will be covered financially until you secure new work. Add onto things like chil
Re: (Score:2)
Pretty sure he's also to blame for the horrible weather...
Re: (Score:3)
Joe Biden convinced Google to stop developing products that no one wants? Well a big THANK YOU from me!
Re: (Score:2)
Seems a little reductionist. Or moronic. Or both.
Can't tell. Tell me more about your team's plan to force Google to retain its employees while navigating a major disruptive technological event.
Or expand your imagination and get a dose of perspective, because it seems you have a dartboard and not a brain.
Re: (Score:2)
Wait are you saying you want Joe Biden to directly control the Fed interest rates?
You must be some sort of super-Democrat, even I wouldn't want that!
Re: (Score:3)
MBAs think they can work in any field because, supposedly, it doesn't matter whether they know anything about the company they allegedly lead. They could just as well be replaced by a Magic 8 Ball without any loss of decision quality.
Hasn't anyone wondered why the only job an LLM could instantly do without any loss of quality is also the only job nobody has tried to use an LLM for?
Re: (Score:2)
https://www.wsj.com/articles/i... [wsj.com]
https://www.itworldcanada.com/... [itworldcanada.com]
https://itmanager.substack.com... [substack.com]
Re: (Score:2)