AI CEOs Worry the Government Will Nationalize AI (thenewstack.io) 125
Palantir's CEO was blunt. "If Silicon Valley believes we are going to take away everyone's white-collar job... and you're going to screw the military — if you don't think that's going to lead to the nationalization of our technology, you're retarded..."
And OpenAI's Sam Altman is thinking about the same thing, writes long-time Slashdot reader destinyland: "It has seemed to me for a long time it might be better if building AGI were a government project," Sam Altman publicly mused last week... Altman speculated on the possibility of the government "nationalizing" private AI companies into a public project, admitting more than once he's wondered what would happen next. "I obviously don't know," Altman said — but he added that "I have thought about it, of course." Altman hedged that "It doesn't seem super likely on the current trajectory. That said, I do think a close partnership between governments and the companies building this technology is super important."
Could powerful AI tools one day slip from the hands of private companies to be controlled by the U.S. government? Fortune magazine's AI editor points out that "many other breakthroughs with big strategic implications — from the Manhattan Project to the space race to early efforts to develop AI — were government-funded and largely government-directed." And Fortune added that last week the Defense Department threatened Anthropic with the Defense Production Act, which allows the president to designate "critical and strategic" goods for which businesses must accept the government's contracts. Fortune speculates this would've been "a sort of soft nationalization of Anthropic's production pipeline". Altman acknowledged Saturday that he'd felt the threat of attempted nationalization "behind a lot of the questions" he'd received on X.com.
How exactly will this AI build-out be handled — and how should AI companies be working with the government? In a sprawling ask-me-anything session on X that included other members of OpenAI leadership, one Missouri-based developer even broached an AGI-government scenario directly with OpenAI's Head of National Security Partnerships, Katherine Mulligan. If OpenAI built an AGI — something that even passed its own Turing test for AGI — would that be a case where its government contracts compelled it to grant access to the Defense Department?
"No," Mulligan answered. At our current moment in time, "We control which models we deploy"
The article notes 100 OpenAI employees joined with 856 Google employees in an online letter titled "We Will Not Be Divided" urging their bosses to refuse the use of their models for domestic mass surveillance and for autonomous killing without human oversight.
But Adafruit's managing director Phillip Torrone (also long-time Slashdot reader ptorrone ) sees analogies to America's atomic bomb-building Manhattan Project, and "what happened when the scientists who built the thing tried to set conditions on how the thing would be used." (The government pressured them to back down, which he compares to the Pentagon's designating Anthropic a "supply chain risk" before offering OpenAI a contract "with the same red lines, just worded differently".)
Ironically, Anthropic CEO Dario Amodei frequently recommends the Pulitzer Prize-winning 1986 book The Making of the Atomic Bomb...
offensive (Score:1, Troll)
Retarded is a very offensive word. I wish people would stop using it.
Re:offensive (Score:5, Insightful)
Re: offensive (Score:5, Insightful)
This is apparently very offensive to the people who proudly proclaimed "fuck your feelings" once, lol.
Re: (Score:2)
Re: (Score:2)
Admittedly, my big problem is remembering not to refer to people in a meeting as "guys" when there are women there. I'm not sure what other pitfalls I would be falling into. I use the term "first nations people". I don't use the 'N' word. I'm very careful not to offend people when I'm talking.
Re: (Score:2)
Re: (Score:1)
It doesn't bother some women, but I don't want to offend the ones that it does. Being offensive is just not what I'm about. But I imagine that is much different between Canadians and Americans. Americans seem to take pride in being offensive, almost like the right to offend people is in the Constitution and any violation is a threat to their freedom.
Re: (Score:2)
I'm originally from the Southern USA, and never lost the word y'all from my vocabulary. Ironically enough, when inclusiveness hit the scene all of a sudden my word was of course the most inclusive. It didn't really catch on though, and I'm sure Southern Republicans don't appreciate the irony.
The "You guys" phrase we use to attribute to the Yankees (northerners). As you point out, it's odd, or maybe even weird, to call a mixed group or a group entirely composed of females "you guys" and yet most the USA stil
Re: offensive (Score:2)
That just reinforces the world view that Americans are so ignorant that they are stuck in bad ways that no longer apply to the world and will never get out.
Re: (Score:2, Informative)
That's not being "rebellious". That's just being immature and self centered.
Re: (Score:1)
A great many of us see it as being independent and self-reliant.
Without trying to be rude, if you are Canadian born and raised, you are used to being part of a commonwealth. You didn't dump the tea in the harbor and tell England to piss off. It's just in our history and there is still a lot of pride left. We just need to get our proverbial ducks lined back up. Who knows, maybe when we nationalize AI, AI will take over and FORCE the changes we actually need.
LOL, sorry, yeah that last part won't happen.
Re: offensive (Score:2)
Yeah well we Canadians tend to live for the day and not things that happened in the 1800's.
Re: (Score:1)
Re: (Score:2)
That's why I have resisted all my life to live in the US. It is true that most of your fellow Americans think you are corrupt and therefore drive this attitude that you need to protect yourself. Do yourself a favor and move. It is not like that in most of the world.
Re: (Score:2)
Ha, using "guys" for a mixed gender groups is a stumbling point for me too. It's worth the hassle of change though, I know I wouldn't want to be refered to as a woman or anything else that I'm not.
Re: offensive (Score:2)
Ban one word, and people will find another. You can't ban an idea.
Re: (Score:2)
Really.. what is this other word that is derogatory to autistic people?
Re: offensive (Score:2)
I have a nephew that is severely autistic.
Re: (Score:1)
"Retarded" is actually merely descriptive and a perfectly fine technical term. But somebody that is retarded may not be able to understand that.
Re: (Score:2)
Tardo is the Latin verb that "retarded" comes from. It means: to be slow. When describing someone publicly, do you think it is appropriate to describe them as "slow"?
Retarded is not a technical term anymore, nor merely descriptive; it is absolutely pejorative.
As a famous person geezer once said: fuck off.
Re:offensive [free speech] (Score:1)
Retarded is a very offensive word. I wish people would stop using it.
Quoted against the censor trolls, but I actually think the offensive adjective applies to the speaker thereof, both morally and socially. Still think your FP was kind of a waste...
However, as is (too) often the case, it reminds me of a book. This one is called Feeding the Machine by Muldoon, Graham, and Cant and the machines in question are mostly generative AI systems. Still trying to digest their description of the mess we've gotten ourselves into, but one of the sad examples of human abuse involves the
Re:offensive (Score:5, Insightful)
Re: (Score:2, Insightful)
Re: (Score:1)
Re: (Score:3)
Are you saying that the demented Orange one didn't use the autopen to pardon the 1600+ convicted insurrectionists? The Idiot that can't even *stay* on teleprompter, and goes off rambling on subjects utterly unrelated to what he's supposed to be talking about?
Do you like his drapes?
Re: offensive (Score:2)
Yes because those were never associated with real autistic people.
LLMs != AGI and never will (Score:5, Insightful)
Re:LLMs != AGI and never will (Score:5, Insightful)
Re: (Score:2)
Re: LLMs != AGI and never will (Score:3)
Re: (Score:2)
Pretending to be intelligent works when you are dealing with human fools. As soon as you apply such a tool to reality, it gets a completely merciless reality check though. Remember that "vibe coded" social network for AIs, that did not even get basic authentication right?
Re: LLMs != AGI and never will (Score:2)
Re: (Score:2)
Unlikely, unless that junior dev is really stupid. If that person is not, what would happen is that they start asking questions and doing research.
Re: LLMs != AGI and never will (Score:1)
Re: (Score:2)
What makes you so sure? We have basically two really large pools of training data: written and digitized text, and videos.
Videos are large and mostly redundant, with low information density, whereas text is usually dense and carries a lot of information. Both are useful in different ways, but given limited resources, text is more valuable right now for "thinking", while we will probably need video for more robotic tasks.
There are good arguments for better architectures than LLM, but these would often
Re: (Score:1)
There is mathematical proof that LLMs cannot do AGI. To a smart person that ends the discussion.
I think you mistake what the average person does for General Intelligence. It is not. If you want regular use of General Intelligence in a human, you need an "independent thinker" (about 10-15% of all people) or at least somebody that can be convinced by rational argument (about 20% of all people, includes the independent thinkers). Merely "thinking" is not enough. You need to do it successfully.
Re: (Score:3)
Do you have a reference to that proof? As LLMs are (similar to most other neural nets) general function approximators, it's unlikely they cannot be used to implement AGI, if AGI can be implemented using current computing paradigms.
This doesn't say they are a good architecture, the first to reach it, or whatever, but when people get Doom to run on a toaster, they still prove that it can run Doom, even when there are better devices for playing Doom.
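To make the "general function approximator" point concrete, here is a minimal sketch (my own toy example under stated assumptions, not anything from the papers discussed below): a one-hidden-layer network fitted to sin(x) with plain gradient descent. It only needs numpy; every name and hyperparameter is an assumption chosen for illustration.

# Toy universal-approximation demo: a one-hidden-layer tanh network
# fitted to sin(x) by hand-rolled gradient descent. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 256).reshape(-1, 1)
y = np.sin(x)

hidden = 32                                   # hidden-layer width
W1 = rng.normal(0, 1.0, (1, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.1, (hidden, 1)); b2 = np.zeros(1)
lr = 0.05                                     # learning rate

for step in range(5000):
    h = np.tanh(x @ W1 + b1)                  # hidden activations
    pred = h @ W2 + b2                        # network output
    err = pred - y                            # d(MSE)/d(pred), up to a constant
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)          # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("final MSE:", float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2)))

Of course, being able to represent a function in principle says nothing about whether ordinary training will actually find it, which is where the practical objections in this thread come in.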
Re: (Score:3)
Do you have a reference to that proof? As LLMs are (similar to most other neural nets) general function approximators, it's unlikely they cannot be used to implement AGI, if AGI can be implemented using current computing paradigms.
My guess is that he's referring to Merrill & Sabharwal's results on circuit complexity. https://aclanthology.org/2022.... [aclanthology.org], https://arxiv.org/abs/2207.007... [arxiv.org], and https://papers.neurips.cc/pape... [neurips.cc] are key results.
However, Merrill & Sabharwal also showed (https://arxiv.org/abs/2310.07923 [arxiv.org]) that adding chain-of-thought capabilities breaks the previous results, so the earlier papers basically only apply to pre-2023 LLMs.
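For readers skimming, here is roughly how I read those results, written from memory and hedged accordingly; the precise statements and side conditions are in the linked papers. In LaTeX sketch form (assuming amsmath):

% Hedged paraphrase of the Merrill & Sabharwal line of results, not a verbatim statement.
\[
  \text{fixed depth, log precision, no chain of thought:}\quad
  \text{transformer-decidable languages} \subseteq \mathsf{TC}^0
\]
\[
  \text{with } t(n) \text{ chain-of-thought steps:}\quad
  \text{the model can simulate roughly } t(n) \text{ Turing-machine steps}
\]

So polynomial-length chains of thought take you up toward problems in \(\mathsf{P}\), which is why the earlier upper bounds stop applying once chain-of-thought inference is allowed.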
Re: (Score:2)
Thanks, those are interesting links (and will take some time to read). I guess it's then more about theory (they could learn it) versus practice (given usual training, they are unlikely to learn it)?
And when it comes to AGI I think stuff is muddy anyway. I have no idea if we will reach it and some doubts that we will need it.
It has a few nice thought experiments, but why do we need AGI when we have AI systems that do the same without being AGI?
Re: (Score:2)
And when it comes to AGI I think stuff is muddy anyway. I have no idea if we will reach it and some doubts that we will need it.
It's definitely muddy, since we can't even define what AGI is with any precision. As for whether we'll need it... I mean, we have human intelligence, which we believe to be general, and find that pretty useful, so I don't know why AGI wouldn't be useful, especially if it's a lot smarter than we are.
It has a few nice thought experiments, but why do we need AGI when we have AI systems that do the same without being AGI?
Because they don't do the same without being AGI.
For our own safety as a species I think it's better to keep our tools sub-AGI and lacking in agency, but that doesn't seem to be where we're headed.
Re: (Score:2)
There are newer results as well. And iterating stupidity does not fix this. In fact, reasoning models are even more limited in some aspects and will produce pure hallucinations from a certain complexity onwards.
Re: (Score:2)
You didn't even provide the reference for your original claim of proofs. Now you post again without references. Cite some papers, if you know "newer results"
Re: (Score:1)
There is mathematical proof that LLMs cannot do AGI.
Bullshit.
For that to be true we'd first need a formal definition of what AGI is, and we don't have that, and don't have anything close to it.
My guess is that you're probably referring to the circuit complexity results by Merrill & Sabharwal. But those results specifically address only fixed-depth, fixed-precision transformers, which means they don't apply to LLMs with chain-of-thought-augmented inference. That is, the proof doesn't apply to current LLMs, only to LLMs as they existed 3+ years ago.
Re: (Score:3)
Actually, no. This gets proven by showing that LLMs cannot cross capability boundaries far below what AGI would need. No full definition of AGI is needed.
But I guess you have no clue about proof theory and are aggressive about your shortcoming. Not uncommon in idiots.
Re: (Score:2)
There are still idiots that think LLMs can deliver AGI? Fascinating. We have solid mathematical proof that this is impossible. And anybody with a working mind (a minority, to be fair) saw it long before.
The LLM approach cannot ever develop insight. (Here "insight" = knowing something and knowing it is reliably true.) Not possible. Statistical models cannot do that unless you run them in a way that is essentially non-statistical. But then they cannot perform anymore at all. And insight is the core ingredien
Re: (Score:2)
And by this getting modded down, we get some nice, additional proof that some people do not have general intelligence either.
Re: LLMs != AGI and never will (Score:2)
Let governments pay all the bills (Score:5, Interesting)
and Sam and his gang will just start another company using all knowledge learned.
Re:Let governments pay all the bills (Score:5, Interesting)
You've got it. Sam Altman is in a bind. He doesn't have a business plan, and he has a lot of debts and expectations that are coming due soon. He's been talking about getting bailed out by the government since last year, IIRC. He *wants* to be bailed out.
That would, of course, be the worst decision since the 2008 GFC when banksters got bailed out.
Then again, you know who is in charge, so it may happen.
Re: Let governments pay all the bills (Score:2)
Re: Let governments pay all the bills (Score:3)
Are we sure, though? [apnews.com]
In my book betting on who's gonna kill themselves next because of what you do to them is pretty Gulag rock bottom.
Re: (Score:2)
I would also point out that a betting pool on death encourages homicide.
I may have completely lost the context but I feel like this conversation is spanning two or more threads here - maybe one about betting pools on death, one about nationalization of AI, and one about tariff repayments.
It has always been my belief that current "AI" efforts were really about government surveillance and m
Re: (Score:2)
I was speculating on the idea of how "bad" a government that is making all these decisions can be, nothing more.
And I find, with an ever-present unease and vague fear, that there's no bottom to the badness and therefore no limit to the harm that can be unleashed by the decisions of some kinds of government.
So, I'm officially reclassifying the mantra "the government is always bad and should be pruned at first opportunity" into the "every accusation is a confession" category.
Re: Let governments pay all the bills (Score:3)
Well, he has two thirds of a business plan. He is only missing Phase 2. Once he has Phase 2, he can proceed to Phase 3: Profit
Re: (Score:3)
Indeed. The "core LLM" scammers are all close to collapse at any time. They need massive influxes of money because they do not produce anything that is even remotely valuable enough to justify the mountains of money they are burning. Google, Microsoft and some others can (maybe) survive the collapse of the LLM hype, but Altman and OpenAI cannot.
Re: (Score:3)
That would, of course, be the worst decision since the 2008 GFC when banksters got bailed out.
The 2008 bank bailouts actually worked out extremely well. The bailouts were in the form of loans which were repaid on time -- mostly well ahead of time -- and with significant interest. The taxpayers came out well ahead in real dollar terms, even ignoring the question of what might have happened if Bush hadn't bailed them out. And macroeconomists are pretty confident that the result of not bailing them out would have been a depression.
I'm not saying this means government should bail Altman out -- and AF
Re: (Score:2)
That is not what people object to. The government(s around the world)
Re: (Score:2)
All of it could have been done just the same, but in return for 100% equity in the failing banks assigned to the governments. The existing shareholders of failing financial service industries should have lost everything.
I don't think that would have hit the people you want to hit. The shareholders of financial institutions are overwhelmingly not employees of the financial institutions. In 2008, Citigroup was 84% owned by institutional investors, meaning pension funds, mutual funds, insurance companies, etc. And the biggest owners of mutual funds are 401k accounts and IRAs. I couldn't find any numbers on how much financial institution stock is owned by various sorts of retirement accounts, but it's a lot, because retireme
Re: (Score:2)
I think this is the plan. Get the government contracts, then ownership, and make US citizens foot the bill for surveillance that will definitely be used against them. There really isn't anything more they can do with such limited thinking when it comes to algorithms. Tech Bros are trying to make their sci-fi dreams a reality. They are such idiots, maybe skilled in business grifting, but the sci-fi futures that were written about are nightmares, and the heroes typically dismantle them.
Re: (Score:2)
Other countries (Score:3)
If the US government were to nationalize the US AI companies, I am sure other companies would spring up in other countries. And I suspect the US-government-run AI projects would be mired in bureaucracy and incompetence, giving an edge to the competing ones.
Re: (Score:2)
Re: (Score:3)
Why do you think that other countries do not have an edge over current US AI companies?
That one is pretty simple: The AI hype in the US is extreme and mainly irrational.
Duh (Score:3)
The Defense Production Act has been around for a long time, and it has actually been actively used for some purpose by every President elected in the past 70+ years. This should not be a surprise.
War always takes precedence (Score:4, Insightful)
But Adafruit's managing director Phillip Torrone (also long-time Slashdot reader ptorrone ) sees analogies to America's atomic bomb-building Manhattan Project, and "what happened when the scientists who built the thing tried to set conditions on how the thing would be used."
The ultimate limit on what you will accept and the arbiter of what you are willing to do always comes down to survival. Warmongers are fearful people who believe that you can only ever be secure by subjugating all potential rivals, and danger becomes their excuse for making the ability to make war more important than anything else.
True security only comes from being part of an order that benefits everyone in it more than they would gain by overturning it. Making war more important than everything else also makes it the dominant factor in all things by making it inevitable. Allowing endless war to be the permanent order prevents us from finding a new stasis without it. If we cannot learn to control fear, fear will always control us.
Re: (Score:2)
Indeed. Well said.
What I think we are currently seeing is that the end of the "American Century" will lead into the "Rise of the Middle Powers". At least I hope that will be the outcome or we are all really, really, really screwed. A long-term coalition of middle powers is exactly that: High mutual benefits and generally agreed on rules that everybody in there respects. That is what Carney and others are currently trying to create and I think it may just work.
War, on the other hand, needs one big bully tha
Re: (Score:2)
Good luck defending against an incoming Iranian nuke with a stiff upper lip and macho chest thumping bravery...
Is that you, Netanyahu? Order any children murdered en masse today? Is the Iranian nuke in the room with us right now?
It's funny ... (Score:2)
... how some people are just now realizing the dangers of nationalization.
"Wait, you mean that my guys might not always be the ones actually running the government???"
Re: (Score:2)
Yep. Many people are too retarded to understand that a country is a community and that a government needs to serve all of the population, or things go to hell. We see a nice example of that currently in the US, but that is by far not the only Kakistocracy on the planet. It is just the most pathetic one, because information is actually available in the US and people could really have known better.
Keep in mind it's not because (Score:3)
Basically we are seeing a fight between oligarchs. Similar to what you used to see in Russia before Putin just started killing them all after taking control.
None of this is good for you or me.
If governments created AI (Score:2)
...we wouldn't have any.
Nationalization would be great (Score:2, Insightful)
You can help change this situation in November
Re: (Score:2)
wah we want to milk the government (Score:2)
We need companies to be punished (Score:2)
All human knowledge destroyed by AI and all human la
FOSS (Score:4, Interesting)
Re: (Score:2)
Indeed. These models are not as "spectacular" though, because they focus on things that LLMs can actually do somewhat well. For example, Apertus (the free Swiss model) has been trained on apparently over 1000 languages and mainly targets translations and chatbots. That is an area where reliability does not need to be 100% (unlike, say, writing code or running LLM "agents", which will both be subjects of targeted attacks) and hence this model focuses on actual usefulness instead of making grand claims to rake
Re: (Score:2)
Not just that: even if they do nationalize it, people can still fork the FOSS version and build independent private companies around it. Hopefully w/o needing datacenters worldwide to run on, but just running on their own processing farms, w/ themselves paying for the power to run it all
Be careful what you wish for (Score:2)
Some might cheer the nationalization of AI companies. They should be careful what they wish for. I doubt investors would continue to pour unlimited billions into AI if they knew they were going to be run by and for the government. I doubt governments will pour the same billions into the same parts of AI deployment.
If you think AI is dangerous and want to slow it down, that's a feature rather than a bug. If you want rapidly advancing capabilities (to pick a random example, to run fleets of armed, autonomous
Re: (Score:2)
The government money isn't going to come in until after the bubble crashes, i.e. the investors disappear on their own. Then the AI companies will be too big to fail, they'll be national security, and whatever else needed to obtain a good old-fashioned bailout.
Your economy will be wrecked once the cycle's complete, but OpenAI's balance sheet will be made whole.
Re: (Score:2)
Nationalize in this case means a President promoting investment, influencing who runs the company,...
That's the thing. As soon as POTUS gets his fingers into company operations, that's when I, as an investor, start looking for the door. The one thing I'm sure of is Trump won't use his influence to boost shareholder value.
It would be an interesting research project to see whether that's actually happened to Intel, US Steel, and the other companies Trump has demanded stakes of.
Yes, please do that (Score:2)
Then the nations that are not on board can actually continue to have a working economy.
The actual fact of the matter is that LLM-type AI can deliver very little. Not nothing, but not even remotely what currently gets claimed. If you go "all in" on LLM-type AI, not only will your economy collapse because you cannot provide work for people anymore, it will also collapse because most things will get very unreliable and many things will stop working altogether.
In a related note, I really understand how
Adding human oversight to kill lists isn't hard (Score:1)
Adding a human-oversight step to kill lists/plans during a war is not really that big of an issue.
It will always be there anyway, if only to avoid wasting a few tens of millions of dollars due to some AI hallucination.
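To illustrate why the oversight step itself is the easy part, here is a minimal, purely hypothetical sketch of a human-in-the-loop approval gate; every class and function name below is invented for illustration and comes from no real military or vendor system.

# Hypothetical human-in-the-loop gate; all names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Recommendation:
    target_id: str           # identifier produced by some upstream model
    rationale: str           # the model's stated reasoning
    model_confidence: float  # the model's self-reported confidence

def human_review(rec: Recommendation) -> bool:
    """Block until a human operator explicitly approves or rejects."""
    print(f"AI recommends action on {rec.target_id} "
          f"(confidence {rec.model_confidence:.2f}): {rec.rationale}")
    return input("Approve? [y/N] ").strip().lower() == "y"

def execute_if_approved(rec: Recommendation) -> None:
    if human_review(rec):
        print("Approved by operator; forwarding for action.")
    else:
        print("Rejected; nothing is executed.")

As the rest of the thread suggests, the hard part is not writing the gate but making sure institutions actually keep it in the loop under time pressure.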
Re: (Score:1)
Have I posted in the wrong tab? I am not sure.
AI CEOs? (Score:2)
I've encountered AI help desk attendants, but wasn't aware that any AI had risen to the exalted position of CEO!
Despite fears of an AI apocalypse, I can't help wondering if a hallucinating LLM would be less dangerous - and less creepy - than a hallucinating Alex Karp.
Build their own! (Score:2)
This is easily solved. They can build their own. Problem solved. I have no doubt that some of the AI vendors would be happy to bid on setting one up.
"why aren't they stopping us" (Score:1)
What it actually sounds like is that they're wondering WHY the government HASN'T DONE THIS YET. The answer is: because it's run by shitty criminals with no vision and no understanding at all.
Wild West vs. Nationalize (Score:2)
Aren't there a few intermediate steps we can talk about? Why are they jumping directly from the current land grab bonanza to scaremongering "Nationalize!!"?
Maybe if we had some well-meaning regulation and spelled out product liability, it would go a long way towards quelling fears and creating some predetermined penalties to encourage compliance.
Currently, LLMs spewing convincing but erroneous results are polluting court filings and student papers (among many examples), with no downside consequences for the ven
Why would they bother? (Score:2)
Normally fed R&D programs are aimed at projects that don't have particularly good economic incentives; and state-run organizations tend to be clustered in areas where the economics check out but the market incentives are troublesome (like infrastructure and public health) or where there are serious s
Worry? (Score:2)
doesn't matter (Score:1)
Translation (Score:2)
What Sam and cohorts are really thinking is that the AI gravy train is wobbling head-first into disaster, and they want the government to bail them out before the whole thing goes off the rails.
Re: (Score:2)
Given that Palantir is all about government surveillance, I won't say you're wrong. They want the Orange One to buy them out before they collapse, though.
Everything already belongs to the government (Score:2)
They are merely allowing you to use their property.
Re: Screwing the military... (Score:2, Offtopic)
Quite the opposite, dear, you have chosen as your leaders and your mouthpieces people who mock the military, call them losers and traitors and try to swindle them out of pay and benefits.
People like cadet bone spurs, or his vp, or his campaign spokesman Oddjob, the wife-warrior, etc.
Re: Screwing the military... (Score:2)
Keep telling yourself this. How's that little victorious war going? I hear there will be more caskets and veterans soon.
Re: (Score:2)
Wow, you sound like a putin pawn from March 3, 2022.
"That victorious little war, as you call it, is going great! Ukraine's regime has collapsed: they just don't know it as yet. Their president ran away to Warsaw and his army is surrendering en masse. Not only that, nobody in Ukraine wants to take up the post, the country is running without a head. In the meantme, at sea, all the Ukrainian fleet has been sunk. Sound to me like a regime that has already imploded, even if they don't know it as yet"
If you do a
Re: (Score:2)
I don't need a "crystal ball", rightwing nutjob. There are numerous examples of very stupid people with a lot of power doing dumb shit.
In this case we have absolute thrash with no education, no ability to think critically, and not even the ability to talk coherently, with a lot of hardware and people doing the bidding of a minor dictator who must do whatever he can to cling to power or else go to prison.
It is quite obvious what will happen, a fubar and then a cut-n-run.
Re: (Score:2)
It's trash, auto-carrot, not thrash.
Re: (Score:2)
That is a very .... disconnected and romanticized view of reality you have there. Also utterly dysfunctional.
But what I see here is a very small person deeply in fear of anything they do not understand. And that seems to be a lot of things.