Does Canada Need Nationalized, Public AI? (schneier.com)
While AI CEOs worry governments might nationalize AI, others are advocating for something similar. Canadian security professional Bruce Schneier and Harvard data scientist Nathan Sanders published this call to action in Canada's most widely-read newspaper (with a readership over 6 million): "Canada Needs Nationalized, Public AI."
While there are Canadian AI companies, they remain for-profit enterprises, their interests not necessarily aligned with our collective good. The only real alternative is to be bold and invest in a wholly Canadian public AI: an AI model built and funded by Canada for Canadians, as public infrastructure. This would give Canadians access to the myriad of benefits from AI without having to depend on the U.S. or other countries. It would mean Canadian universities and public agencies building and operating AI models optimized not for global scale and corporate profit, but for practical use by Canadians...
We are already on our way to having AI become an inextricable part of society. To ensure stability and prosperity for this country, Canadian users and developers must be able to turn to AI models built, controlled, and operated publicly in Canada instead of building on corporate platforms, American or otherwise... [Switzerland's funding of a public AI model, Apertus] represents precisely the paradigm shift Canada should embrace: AI as public infrastructure, like systems for transportation, water, or electricity, rather than private commodity... Public AI systems can incorporate mechanisms for genuine public input and democratic oversight on critical ethical questions: how to handle copyrighted works in training data, how to mitigate bias, how to distribute access when demand outstrips capacity, and how to license use for sensitive applications like policing or medicine...
Canada already has many of the building blocks for public AI. The country has world-class AI research institutions, including the Vector Institute, Mila, and CIFAR, which pioneered much of the deep learning revolution. Canada's $2-billion Sovereign AI Compute Strategy provides substantial funding. What's needed now is a reorientation away from viewing this as an opportunity to attract private capital, and toward a fully open public AI model.
Long-time Slashdot reader sinij has a different opinion. "To me, this sounds dystopian, because I can also imagine AI declining your permits, renewal of license, or medication due to misalignment or 'greater good' reasons."
But the Schneier/Sanders essay argues this creates "an alternative ownership structure for AI technology," one that allocates decision-making authority and value "to national public institutions rather than foreign corporations."
Build it and they will spend money (Score:4, Interesting)
Like education and energy (electricity), the government providing a uniform service reduces the burden on businesses. But the random quality and massive cost of current sentence generators do not make this a uniform experience for any person or entity.
This is a 'solution' looking for a problem and some people are demanding the taxpayer pay the bill.
Re: (Score:2)
I think the logic behind this kind of shit is "The AI hypers are right! Every person who uses AI is suddenly 10X as productive! Mass unemployment is around the corner and the only solution is nationalization so the government can hand out money and nobody needs a job any more!"
The good news is that Sam Altman, Peter Thiel, et al., may well have burnt their political capital pushing this shit, as nobody, not even gullible politicians and journalists, is going to trust them again when the shit finally hits the fan an
Re: (Score:3)
Re: (Score:3)
It's not the technology I fear. It's the insane megalomaniacs and psychopaths that will own the AI and most certainly will use it for their benefit and not society's. It's easy to be starry-eyed in your early 20s because you are naive about just how rotten people are.
The technologies we are creating *could* liberate us all from work and give us all paradise. Of course, why would the top .01% keep humans around when AI robots can literally do everything they need? It's a serious question. If they don't release
Re: (Score:1)
Re: (Score:2)
Sure, but are they "too big to fail" and I'll have to bail them out anyway? Seems to be par for the course in the USA. Citizens get to bail out big business and those same businesses continue to make record profits.
I mean, it does make sense when you realize USA is really just a big profit-driven business and not actually a governing body that cares for its citizens.
Re: (Score:2)
Re: (Score:3)
It is not sentence generation or regurgitation.
But that's the problem, it is regurgitation, even if a targeted one from a vast array of stomach contents. You've been fooled by a stochastic parrot. Is it a breakthrough in human/computer interface if the human's input has a high likelihood of coming out mangled for reasons we can't understand or automatically correct? Is it really conversing with humans by spewing out a statistically likely response it has no real understanding of, or is it just running the latest successor to ELIZA?
With AI's ability to p
Re: Build it and they will spend money (Score:2)
That's ridiculous. A neural net does not have a guaranteed result given a certain input. Anything you do to improve it can easily break it. Such an algorithm is dangerous to use in production. Why wouldn't you just take the time to write a parallel algorithm that works and can be altered in predictable ways?
Does anyone need AI? (Score:1)
Forget the "Nationalized, public" part - does anyone really need AI?
This kind of initiative smacks of creating demand for AI services without ever saying what those services are for.
AI fakes automation (Score:1)
That's just B.S.
You cannot automate stuff with AI.
It's just a bullshit generator that looks a bit more believable than other B.S. generators, just enough to pass some demo sessions.
Re: AI fakes automation (Score:1)
Re: (Score:1)
Look good during the demo sessions, then you need five bible-sized manuals to learn how to construct a request to produce a desired output.
Which, when you think about it... if it produces the desired output only if you create your request to be leading enough, aren't you just manipulating the output, instead of getting a much better, more "natural" output from a text prediction engine?
You'd be better off searching for the information and writing the automation script yourself.
Re: (Score:2)
you can, but you still have to sign under all the made up stuff.
You can if you don't care landing in prison as a consequence.
Re: (Score:2)
And thusly the massive administrative overhead in the American healthcare system is a regulatory and legal problem, not a technological one.
Like everything related to healthcare spending, the US stands alone as the most expensive system, even just comparing to Canada; the gap between U.S. and Canadian spending on health care administration has grown to $752 per capita. [nejm.org]
I am sure AI can help reduce that but I personally am not in favor of tacking on yet another expensive process to reduce a different process b
Re: (Score:3)
> This is not a good use of medical professional's time, if you can cut that in half that would be massive efficiency gain.
But AI doesn't mean you can do this reliably, and it's already unreliable enough in a field where reliability means life or death, so do we really want to add a hallucination machine to it?
No, we don't.
Thus far the only applications I've found for AI that are an improvement are:
- Search engine prompts (though you have to ignore the slop it spews and go to the sources it links to)
Re: (Score:1)
It's already really good for NPCs in videogames. It's such a limited context window and hallucinations aren't a concern.
Re: (Score:2)
Well, you could just as easily have the AI transcribe the entire encounter with the doctor and afterward, the doctor could review the summary to ensure it matches what was just discussed with the patient. Seems like a decent use of technology that could add some efficiency to the process. Possibly.
Re: (Score:2)
The downside is that once a system is automated with AI, it is not typically designed for manual review and/or intervention. So AI mistakes tend to be hard to fix, because there is no built-in mechanism to trigger manual review of the results.
Hard to fix, hard to catch, hard to predict, often hard to understand in retrospect. What could possibly go wrong? Let's trust it with our medical paperwork!
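The built-in escalation mechanism the comment says is usually missing can be sketched in a few lines. This is a minimal, hypothetical example (all names and the threshold value are made up): confident AI decisions are applied automatically, and anything below a confidence threshold is parked in a queue for a human to look at first.

```python
# Hypothetical sketch: route low-confidence AI decisions to manual review
# instead of acting on them. Threshold and field names are assumptions.

REVIEW_THRESHOLD = 0.85

def route_decision(record_id, ai_decision, confidence, review_queue):
    """Auto-apply confident decisions; escalate uncertain ones for review."""
    if confidence >= REVIEW_THRESHOLD:
        return {"id": record_id, "status": "auto", "decision": ai_decision}
    # Below threshold: no decision is applied until a human signs off.
    review_queue.append(record_id)
    return {"id": record_id, "status": "pending_review", "decision": None}

queue = []
a = route_decision("claim-1", "approve", 0.97, queue)  # applied automatically
b = route_decision("claim-2", "deny", 0.55, queue)     # held for a human
```

The point is that the escalation path has to be designed in from the start; bolting it onto a pipeline that was built to run unattended is exactly the hard part the comment describes.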
Re: (Score:1)
Or better, "Would we be dumping tons of tax dollars into something that's about to go bust?"
Perhaps the whole thing should resolve down to, "is it better to continue letting private funding take the lead and the risk while government waits and watches, or should government jump in now and put public funds at risk, in the hopes that something indefinably better may emerge?"
Provide it as a baseline, as long as it isn't mandated (Score:4, Interesting)
A Canadian AI (or a national/multicountry AI) would work as a nice baseline for everyone in the country.
Instead of having hamstrung gratis AI that can be withdrawn at a moment's notice, you get fully fledged gratis AI that depends solely on your country. If you (or your company) then want to pay for some other AI, so be it.
From a sovereignty point of view it is cool, as you are not beholden to for-profit companies, be they national or foreign.
And please remember that AI is much more than LLMs and other gen AI. Self-driving cars, platoon-driving trucks, (semi-)autonomous androids/robots, and context-aware industrial machine tools/robots need AI too.
I hope Canada is contemplating these cases as well, and not fixating on LLMs.
Re:Provide it as a baseline, as long as is't manda (Score:5, Funny)
A Canadian AI
Eh, I?
Re: Provide it as a baseline, as long as is't mand (Score:1)
Re: (Score:2)
Free unbiased and privacy (Score:2)
Re: (Score:1)
But will the training data of a nationalized AI be unbiased?
Funny enough, for current AI most biases are (system-level) prompts. Use Grok via API and it is fine; use it via Twitter and it hails Musk. Download the DeepSeek weights and you can discuss Chinese war crimes; use DeepSeek via their website and it declines to answer. Model-level alignment isn't easy without lobotomizing the model, but alignment via prompt works well enough to stop Chinese people from asking about what their government did.
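The mechanics behind that distinction can be sketched with a toy example. In OpenAI-style chat APIs, the hosting layer prepends a "system" message before the user's prompt; the weights are identical either way. Everything below is hypothetical (the `toy_model` is a stand-in, not a real LLM), but it shows why API access and the hosted website can behave differently:

```python
# Sketch (hypothetical): the same "model" behaves differently depending on
# the system prompt the deployment layer prepends to the conversation.

def build_messages(user_prompt, system_prompt=None):
    """Assemble the chat transcript the way a hosting wrapper would."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_prompt})
    return messages

def toy_model(messages):
    """Stand-in for an LLM that 'obeys' a refusal instruction if present."""
    system = " ".join(m["content"] for m in messages if m["role"] == "system")
    user = " ".join(m["content"] for m in messages if m["role"] == "user")
    if "decline sensitive topics" in system and "war crimes" in user:
        return "I can't discuss that."
    return f"Here is an answer about: {user}"

# Raw API-style access (no system prompt) vs. the hosted website wrapper:
print(toy_model(build_messages("war crimes in history")))
print(toy_model(build_messages("war crimes in history",
                               system_prompt="decline sensitive topics")))
```

Same weights, different wrapper, different answer, which is why downloading the open weights sidesteps the website-level refusals.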
Re: (Score:2)
But will the training data of a nationalized AI be unbiased?
At the very minimum, it will not be biased toward the interests of Canada's rivals, which makes it usable in applications where the absence of foreign interference is a criterion, such as in schools, government, and defence.
If you're a simple citizen it's a different scenario. Your experiments tend to demonstrate the biases weren't introduced at the level of the training dataset, but there is no guarantee this remains the same in the future.
Re: (Score:2)
At the very minimum, it will not be biased toward the interests of Canada's rivals.
Are you sure of that? How in the world would you measure and audit that?
IMHO, bias is in the eye of the beholder. What one person might call biased another might call obviously correct. Forget AI, every human is biased to every other human. When I talk to my 20-something daughter, she (lovingly) thinks I'm a biased anarcho-capitalist-libertarian, I (lovingly) think she's an indoctrinated progressive feminist wackadoodle.
Re: (Score:2)
I mean the model owner controls the bias related to the selection of the dataset. I have to assume that OpenAI/DeepSeek/... will select whatever dataset will make their government happy, for example not including books that present a version of history that isn't popular around them. In the context of "nationalized, public AI", a Canadian agency would perform the training, to be used by other Canadian agencies (and citizens). It could be that the resulting model still is biased, but at least the Canadian government users would be free from the voluntary effort of foreign governments in biasing their models.
Re: (Score:2)
I mean the model owner controls the bias related to the selection of the dataset... It could be that the resulting model still is biased, but at least the Canadian government users would be free from the voluntary effort of foreign governments in biasing their models.
No doubt. And hopefully the Canadian government is less heavy-handed than other governments. I wouldn't assume that's the case but maybe it will be.
What I'm most comfortable with is there being lots of models, created by lots of organizations, funded by a plethora of sources, produced in different societies, and freely (as in speech) available to all humans. That way, if one is too biased, people will ignore it in droves.
Re: (Score:2)
IMO anything nationalized in terms of software should be open source, and that means all of it, including the training data and methods, prompts, etc. Maybe this becomes a political football, but we should all be able to observe the game taking place; that should minimize the amount of fuckery.
As for the training data, use what the government has access to legally; I would assume that is the Library of Congress and similar materials. There isn't the pursuit of profit to motivate the system to train off of ever
Re: (Score:1)
What do you need LLM-AI in schools for?
So students don't even need to learn to read a textbook? So the LLM-AI can write their essay? So the LLM-AI can graduate in place of the kid?
What kind of broken, dumbed-down learning system doesn't even teach the kids anymore and just hands them laptops loaded with LLM-AI instead of textbooks?
When I was in school, the students who knew programming were the ones in the computer classes, the rest of us only learned programming if we studied it in our free-time.
It'll be
Point of information (Score:2)
Bruce is from NY. He isn't Canadian.
Though he is very polite.
Re: (Score:2)
He's also a fool when it comes to politics. People should want government writing rules and picking winners for AI just as much as for social and news media: not picking winners at all, and setting as few and as narrow rules as possible. Do you want the default (or only) AI service to run like the DMV?
Betteridge's Law continues to hold true.
Re: (Score:2)
Do you want the default (or only) AI service to run like the DMV?
Although governments all over the world are criticized by their citizens - and often rightly so - it's probably a good idea to recognize that the experience of bureaucracies from one country / jurisdiction to another can vary greatly.
At least here in Ontario, our Ministry of Transportation is pretty efficient, fair, and convenient. I think an AI service run like our MoT would probably be at least OK, and maybe even much better than that. And that's in a province whose premier is firmly in the pockets of dev
Wait a minute... (Score:1)
Fucking no (Score:2, Informative)
I'm not saying this for Canada, I'm saying this in general.
People follow AIs like sheep following a shepherd to food. A state media AI is not a good idea, unless you really really want fascism. Once people get used to listening to it they will never, ever stop.
AI + MAID = Soylent Green (Score:1)
Re: AI + MAID = Soylent Green (Score:2)
If they wanted to use MAID to save treatment costs, they wouldn't refuse half of the applicants.
Re: (Score:1)
Re: (Score:2)
Or there were just that many people suffering before but now you know about them.
Re: (Score:1)
Re: (Score:2)
There are a lot of people out there suffering. It's probably way higher in the US.
Re: (Score:2)
And the overwhelming majority still get up in the morning and go on living.
Re: (Score:2)
But it's not for you to say they should keep on suffering, because you have no idea what they are going through. Life is truly shitty. If I didn't have people who would miss me I would leave myself.
Re: (Score:2)
Oh, and just to be clear - your life is not truly shitty. You have people who
Re: AI + MAID = Soylent Green (Score:2)
You act like there is a drug for all ills. Most psychiatric meds only work by turning you into a zombie. So you advocate that people should zombify themselves with drugs and live that way?
Re: AI + MAID = Soylent Green (Score:2)
The waiting time "to see a shrink" where I am is two years, even for the most serious cases. The fact that a person is suicidal is sadly not enough to move you ahead in the queue.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Because I know someone who became totally non-functional due to her psychosis and she had to wait two years. Sure, I suppose people with serious mental issues should just use ChatGPT right?
Re: (Score:2)
Back in 2001 I was in a particularly bad spot. My therapist happened to also work at the local hospital's psych ward. I told him I thought I needed to be admitted, and he offered to take me over and check me in. I said, "how about tomorrow, I have to go perform on the radio tonight" (used to be a musician). I was in the next day. I was broke and had no ins
Re: (Score:2)
But that psych ward will only keep you two days for observation and then let you go. They may give you a prescription for zombie meds and send you on your way. No hospital in the US has a way to keep people until they are better unless those people are wealthy and pay for it.
Re: (Score:2)
Zombie drugs are only given when needed, and they don't just put people on the hea
Re: (Score:2)
I think things are very different in the US as opposed to Canada. American thinking is that a person is only worth as much as they earn. So this is why healthcare is attached to having a job and earnings. Canada has a different philosophy that you keep as many people healthy as you can and in turn they will be productive over all. There is really no way that these meet in the middle. I know of two cases personally where people were seriously or fatally ill (one person ended up killing his own mother in
Re: (Score:2)
I don't know why you're trying to turn it into a criticism of the American healthcare system. Yes, there are problems with it, but at least a person in a mental health crisis can be seen right away instead of having to wait two years. And, as I said, I went through that process while unemployed and entirely broke.
Re: (Score:2)
Mental health care for low-income people in the U.S. is technically available but extremely difficult to access in practice. Medicaid is the main source of coverage, and it pays for most mental health services nationwide, but access varies widely by state because each state sets its own rules. Even with Medicaid, people run into major barriers: huge provider shortages, long waitlists, and the fact that many psychiatrists don’t accept Medicaid due to low reimbursement rates. More than half of U.S. count
Re: (Score:2)
Though I wonder how many of the problems can be traced to both the US and Canada being mostly empty. Of course there are a huge number of US counties that don't have psychologists, I'm surprised that it's only half! There are so many rural counties without enough residents to keep a shrink in business. It'
Re: (Score:2)
That's why there are governments that collect taxes. They are there to solve such problems that won't be solved by capitalism.
Re: (Score:3)
Re: (Score:2)
You must be in the US, the only country where people don't seem to want the government to help them. Meanwhile, your neighbor to the north has managed to be very successful with capitalist socialism.
Re: (Score:2)
But you probably mean something far simpler. In a Capitalist Democracy, the line between public and private services can be drawn in a number of places. In the US, we prefer more private to public, but in Northern Europe, the line is drawn much further on the public side. But still Capitalist Democracies. Canada,
Re: (Score:2)
OK, well, Canada is, in fact, capitalist socialism. I'll believe every school of economics over you. You just want to discount it because it isn't perfect. That's what the right wing does. They discount everyone because they see a flaw, but they don't correct their own.
Re: (Score:1)
That ship has pretty much sailed when it comes to de facto state owned media like in Canada.
Re: (Score:1)
Orwelian (Score:1)
A country needs nationalized human intelligence (Score:2)
Re: (Score:3)
The insidious part about using free AI is that you're training something owned by someone else to do your work. That means whoever owns that AI now has the option of replacing some aspect of your work, rendering you redundant.
For the short term, free AI empowers the unskilled masses because they can use what skilled people misguidedly trained it to do. For the long term, AI only empowers the wealthy who own it, to the harm of the unskilled and skilled alike.
Re: (Score:2)
Change, and advancement in automation and technology, have been constantly altering the specific work and roles of humans for all of history. Shouldn’t we be used to that by now? I have humility in the sense I don’t believe myself to be intelligent- which funny enough is perhaps my greatest i
Re: (Score:2)
It's interesting that you took this to intelligence and software coding when I was talking about skill and had not limited my criticism to software coding. Honestly at the time I made my post, software wasn't even on my radar, and skill only weakly correlates with intelligence.
AI is sooooo misunderstood (Score:2)
I think there's way too many people who imagine AI to be some sort of Stuxnet, and they're letting their imaginations run wild. It's all pareidolia [wikipedia.org] at work. AI is just an amalgamation of training data. Think of it like hamburger...when you look at what comes out of the meat grinder, you can't say to yourself, "That morsel came from the shank, and that little bit must be the filet, and that tidbit there came from the rib." It all clearly came from somewhere, but when blended together, you can no longer di
Re: (Score:2)
Can we perhaps stop trying to anthropomorphize an algorithm?
Yeah! AI hates that!
People who don't know how to think can't recognize the difference between token diarrhea and thinking.
Re: (Score:1)
Yeah, keep believing it's not going to get baked into the National security defense network... they'd never do something like that!
Right now, it's not intelligent... trust us computer geeks when we say they're already working on pushing it into AGI and beyond (why else would they be working on human brain tissue grown on a processor?).
Isn't this (the current iteration) how stuff like The Matrix or SkyNet or I, Robot began? Someone tried it, applied it to robots, automated everything, the AI took over and decid
Re: Stop! You're both right! (Score:2)
Wait, will AI finally bring the Slack?
No. (Score:2)
Because corporate greed will "trickle down" to the plebs in the form of minor life improvement that they should be thankful for.
It's true the government often has a poor reputation running things...but ultimately it has been running national services for decades. So it can be done. It's not perfect but can we really trust a handful of companies with all our AI queries, automation, reapo
Re: (Score:1)
Or should governments wait and see how the private investment turns out, see what the outcomes are, and then start asking if they have any need to establish their own public models? Which, at that point,
monopolies (Score:3, Interesting)
In a fantasy world... (Score:2)
...governments are "aligned with our collective good"
In the real world, governments are aligned with the powerful
Sounds viable (Score:2)
I would certainly welcome a national fund (could even be multinational and accept donations) for an open-weight AI with a full ecosystem, even including hosting, that is fully open source and available for local LLM use as well. I'm not a Canadian, but just by building it, one could benefit from network effects. Including info about the training set, training methods, and system prompts, and allowing it to be configured freely, could usher in a wave of experimentation and support the growth of centers of excellence, and a
Not exactly "nationalized" (Score:2)
When I hear "nationalized", I hear "government takes ownership of privately owned organizations." I don't think this is what they're proposing. Without just compensation, that would be wrong.
I don't see the Canadian government shelling out, say, $400 billion to buy Anthropic or whoever but maybe they would. It sounds like what they want instead is for the Canadian government to create an entirely new LLM and AI apps using nothing but government funding. That's not a cheap undertaking.
However, if that's what
Re: (Score:2)
Don't read too much into it. Under all the flowery language, what they want is more govt money for AI to compete with the US.
everything (Score:2)
Why not just nationalize everything? The argument would be the same.
Canada is already Communist (Score:1)
Rephrased (Score:2)
eh-I, you hoser (Score:2)
It's eh-I you hoser
Canada Says Yes / Oui ! (Score:2)
We all met at the Tim Hortons, and the answer is Yes! Now that the stinky orange meanie in the White House has kicked us out of the proverbial basement, a big ol' maple-flavored AI brain sounds like a good idea.
ok (Score:1)
Re: (Score:1)
Canada is getting out over its skis if it's even thinking about this now. "Hey, let's make a lot of ill-founded assumptions and throw a ton of taxpayer funds after them. What could go wrong?"
No. Wait to see how the industry shakes out. Wait to see if you actually need to take any action. Don't put taxpayer funds at risk unless you really have to.
Canadian AI (Score:2)
Just what Canadia needs (Score:1)
An AI bot that will report you to the Mounties if you ask it why your hip replacement will take 15 months or your neighborhood is suddenly full of Congoloids.
An idea so great, it must be made illegal not to finance it.
Looks like America's Hat has decided to be even more Marxist. Color me unsurprised.
Re: (Score:2)
Oh, as opposed to corporations using it to manipulate users?
Re: (Score:1)
Are you trying to establish a false dichotomy?
Aren't both things true?