AI Technology

AI R&D is Booming, But General Intelligence is Still Out of Reach (theverge.com)

The AI world is booming across a range of metrics covering research, education, and technical achievements, according to the AI Index report -- an annual rundown of machine learning data points, now in its third year. From a news writeup, which outlines some of the more interesting and pertinent points: AI research is rocketing. Between 1998 and 2018, there was a 300 percent increase in the publication of peer-reviewed papers on AI. Attendance at conferences has also surged; the biggest, NeurIPS, is expecting 13,500 attendees this year, up 800 percent from 2012.
AI education is equally popular. Enrollment in machine learning courses in universities and online continues to rise. Numbers are hard to summarize, but one good indicator is that AI is now the most popular specialization for computer science graduates in North America. Over 21 percent of CS PhDs choose to specialize in AI, which is more than double the second-most popular discipline: security / information assurance.
The US is still the global leader in AI by most metrics. Although China publishes more AI papers than any other nation, work produced in the US has a greater impact, with US authors cited 40 percent more than the global average. The US also puts the most money into private AI investment (a shade under $12 billion compared to China in second place globally with $6.8 billion) and files many more AI patents than any other country (with three times more than the number two nation, Japan).
AI algorithms are becoming faster and cheaper to train. Research means nothing unless it's accessible, so this data point is particularly welcome. The AI Index team noted that the time needed to train a machine vision algorithm on a popular dataset (ImageNet) fell from around three hours in October 2017 to just 88 seconds in July 2019. Costs also fell, from thousands of dollars to double-digit figures.
Self-driving cars received more private investment than any other AI field. Just under 10 percent of global private investment went into autonomous vehicles, around $7.7 billion. That was followed by medical research and facial recognition (both attracting $4.7 billion), while the fastest-growing industrial AI fields were less flashy: robotic process automation ($1 billion of investment in 2018) and supply chain management (over $500 million).


Comments Filter:
  • This series is a pretty good overview [youtube.com] of modern neural networks.
  • ...and I see a distinct lack of intelligence; in fact, the leader of our country seems to be as intelligent as a chicken pecking out messages on a typewriter

    • Trump is the literal product/reflection of the USA and its citizens.

      Most everyone in the USA is just a version of Trump themselves. That fact pisses a lot of folks off, though, so they are going to help make sure more Trumps get into office while they deny their own part in the culture that creates Trumps.

      I warned the Obama lovers that there would be a counterstroke from this blind adoration of a terrible person lauded by a bunch of blind people. Guess what you got in return? A terrible person lauded by a

      • You're saying Americans tend to be more affluent, successful, and confident?

        Ok, yeah, I guess I can see that.
    • Er, I was trying to figure out which country you are talking about, but the list of those fitting your description got way too long.

  • How do you teach AI that eating Tide Pods is a good idea?
    • Pain and Darwin awards.
      That is what they are there for.

      E.g. belly pain when eating them. Hospital pain. Being-laughed-at pain.

      Or in AI terms: Triggering of the neurons that inhibit and weaken the neural link between sensory input from Tide pods and the output actions of eating them.
      Plus: Selecting for the nets that don't do that, and overwriting the others with fire. It's the only way to be safe.

      Of course you can (pseudo-)"protect" them, so they never learn. ... That way you get kids that eat Tide pods in
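In code, the comment's "inhibit and weaken the neural link" idea is just a negative-reward weight update. A deliberately tiny sketch, where every number is invented for illustration and nothing here comes from real neuroscience:

```python
# Toy sketch of the "pain inhibits the link" idea above; all values are
# made-up illustration numbers.
weight = 0.8          # strength of the Tide-pod -> "eat it" link
learning_rate = 0.5

def eats(w):
    return w > 0.5    # act only while the link is strong enough

for episode in range(10):
    if eats(weight):
        reward = -1.0                              # belly pain
        weight += learning_rate * reward * weight  # weaken the link

print(eats(weight))  # False: one painful episode was enough
```

The punishment only fires when the action is actually taken, which is the "Pain and Darwin awards" point: a protected agent that never eats the pod never gets the corrective signal.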

    • by Tablizer ( 95088 )

      We just invented Deep Darwin Networks. Nodes that make dumb mistakes are eaten by nodes that don't. Where's our Nobel?
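Joking aside, "nodes that make dumb mistakes are eaten by nodes that don't" is roughly how neuroevolution works. A toy sketch of selection with mutation, using a made-up scalar "network" and fitness target rather than anything real:

```python
# "Deep Darwin Network" as a toy genetic algorithm: candidates far from the
# target (high error) are culled each generation; survivors reproduce with
# small mutations. The target and all parameters are invented for illustration.
import random

random.seed(0)
target = 0.7
population = [random.random() for _ in range(20)]

for generation in range(50):
    # sort by fitness: smallest error first
    population.sort(key=lambda w: abs(w - target))
    survivors = population[:10]          # the dumb half gets eaten
    # survivors carry over unchanged (elitism) plus mutated offspring
    population = survivors + [w + random.gauss(0, 0.05) for w in survivors]

best = min(population, key=lambda w: abs(w - target))
print(f"best error: {abs(best - target):.3f}")
```

Keeping the unmutated survivors around means the best candidate can never get worse, so the error shrinks monotonically toward the target.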

    • Easy...just connect the AI to social media. Before you know it your AI will be chowing down on Tide Pods with a side of cinnamon while choking on other objects in the lab and assigning self-harm tasks to other processes on your server.

      And remember...the two most common elements in the Universe are hydrogen and stupidity. (Helium is a distant third place.)

  • You know what nerves do? What their basic job is?

    They communicate and store information. With other nerves, and with organs.

    That's why you can remember stuff, walk, talk to people, and believe you are a human being, and everything else.

    Thus far, our attempts to 'scan' nerves have largely been rather crude. Bounce things off of them, see their shape, listen for heat from them, etc.

    But when we can really interface and communicate with a single nerve, you know what we can do?

    We can query it for contents. T

    • "We can query it for contents. Then we can ask what its neighbor knows. Then we can figure out what those mean in context. Then we can ask that neighbor, and so on and so forth."
       
      Wow, how does Ryan Fenton know all this? He must have done lots of experiments to prove his statements. The guy must be a leader in the field!

        Just going off of what nature does (see later in the post), and yes - what other small-scale nerve studies have shown in various contexts. None of this is cyber-punk. These are practical, everyday events.

        Unless you think metastudies and nature observations don't count as valid, because they didn't perform the steps themselves.

        Besides - I'm not claiming to be a scientist for the sake of these observations - just making a reasonable statement from observations.

        If you want to disagree - then make statements of di

          "You can go from one nerve to queries of brain data - therefore, you should be able to learn from brains if we just know the language enough."

          That's amazing! Why would I be skeptical of your claims? You are Ryan Fenton!

        • "No offense taken either way - skepticism is crucial."

          You are speaking to a consensus = infallible person. He is not able to comprehend skepticism in any meaningful way. He is the literal definition of a "flat earther" until the majority became "round earthers" so now he is a round earth acolyte. The very definition of a sheep.

          Like you alluded to in your OP, many people, including AI researchers appear to not understand that the Brain is not the only organ in the body processing data. There is all sorts

    • by Kjella ( 173770 )

      But when we can really interface and communicate with a single nerve, you know what we can do? We can query it for contents. Then we can ask what its neighbor knows. Then we can figure out what those mean in context. Then we can ask that neighbor, and so on and so forth.

      Even if a neuron in the brain is as simple as a weight - indications are that it's not, but is actually a small state engine in itself - you'd get >100 trillion weights that only make sense for that particular person. We know that we have specialized brain centers for sight, sound, smell, memory etc. but the actual encoding of information is going to be highly individual. A person who's worked in stables shoveling horse dung is going to have an entirely different set of neurons firing than a person who's onl

      • by narcc ( 412956 )

        You've hit on the reason why AI is on a dead end track. You simply can't get semantics from syntax alone.

        What RyanFenton is pushing here is religion. He seems to think, for no reason that I can see, that science validates his beliefs. I wouldn't bother trying to engage with him further.

        • by jythie ( 914043 )
          I think it is not that AI is on a dead end track, but that the research money is just not going into general intelligence. When I see people working on the general problem they pull from a whole bunch of sub disciplines to try to tackle it. But the resources, be they money or social credit, are pouring into the domains that have the most direct industry applications.
          • But the resources, be they money or social credit, are pouring into the domains that have the most direct industry applications.

            Mostly because we have no fricken idea how to achieve artificial general intelligence,
            and we won't until we have a much better understanding of real
            general intelligence.

    • They communicate and store information.

      Wait, nerves store information?

      Hmmm, I may be mistaken, but I don't think that's right. They transmit impulses but I don't think they store anything.

      • by djinn6 ( 1868030 )

        If that was the case, then every coma would result in amnesia, since your working memory gets wiped by the total system shutdown.

    • But when we can really interface and communicate with a single nerve, you know what we can do?
      We can query it for contents.
      Then we can ask what its neighbor knows.
      Then we can figure out what those mean in context.
      Then we can ask that neighbor, and so on and so forth.

      Sure, given an arbitrarily long period of research.

  • It really has been amazing progress. For example, Tesla will have full self driving by the end of 2019 and the Tesla Robotaxi network will be rolling out in 2020. https://www.usatoday.com/story... [usatoday.com]
     
    The future is great!

    • But seriously, Waymo is doing this in Phoenix, now, for real.
    • by gweihir ( 88907 )

      Hahahaha, no. SAE Level 5 is still some time ahead, except at slow speeds on predetermined, mapped routes. But 10 years or so is realistic at this time. And even a somewhat crappy SAE Level 5 would be massively better than most human drivers, and massively safer.

      • A crappy SAE 5 is the vehicle that will do stupid shit a human would never do when sober, like drive you off a cliff or straight into a barrier, dog, child, jaywalker, or object it is unfamiliar with. Why do you hate people? How many miles do stupid, drunk, untrained, stressed out, distracted, overly caffeinated humans drive every day without accidents? No, until we get a real Level 5 autonomous vehicle that doesn't fuck up all the time, keep them on test tracks and in your neighborhood, not mine.
  • Well D U H. (Score:5, Insightful)

    by BAReFO0t ( 6240524 ) on Thursday December 12, 2019 @01:11PM (#59512822)

    When you build a bunch of static matrix multipliers and pretend it is "AI", or even remotely close to real neurons . . .
    . . . don't be surprised if you can't actually achieve AI!

    I've been saying this ever since I wondered why their neural nets were so blazingly fast, and mine from ye olden days so slow: because I tried to simulate actual physical neurons, while they basically did the "perfectly spherical horse on a sinusoidal trajectory" version. And mine were always learning, while theirs got trained once and then frozen.

    Sorry, but even the amount of memory you need to store the full(!) state of all neurons and synapses/dendrites in a human brain - even when it is frozen - is still far out of reach for a modern supercomputer, let alone a smartphone.

    And don't say "you don't need all of it", because that mindset is why you're not there yet, and never will be until you change it.
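A quick back-of-envelope check of the memory point, using the neuron/synapse counts cited elsewhere in this thread (10^11 neurons, 10^15 synapses) and deliberately minimal, assumed byte costs per unit:

```python
# Back-of-envelope storage for a frozen snapshot of a human brain.
# Counts are ballpark figures from the thread; byte costs are minimal
# assumptions (a single float32 weight per synapse, a small state record
# per neuron) - real biophysical state would need far more.
neurons  = 1.0e11        # ~100 billion neurons
synapses = 1.0e15        # ~1 quadrillion synapses
bytes_per_synapse = 4    # one float32 "weight" - the bare minimum
bytes_per_neuron  = 64   # assumed per-neuron state (potential, thresholds, ...)

total_bytes = synapses * bytes_per_synapse + neurons * bytes_per_neuron
petabytes = total_bytes / 1e15
print(round(petabytes, 2))  # ~4.01 petabytes
```

Even this frozen, weights-only snapshot lands in petabyte territory; simulating live dynamics (the "always learning" part) would multiply both memory and compute well beyond that.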

    • by 110010001000 ( 697113 ) on Thursday December 12, 2019 @01:21PM (#59512862) Homepage Journal

      The consensus of AI experts disagree with you. That many experts can't be wrong. They would have no incentive to exaggerate things.

      • "The consensus of AI experts disagree with you."

        The good ole "consensus = fact" fallacy.

        "That many experts can't be wrong."

        ha ha ha.... if you really believe that then you cannot be helped. History is full of loads of experts being wrong.

        "They would have no incentive to exaggerate things."

        That is like saying there is no incentive for the Police Department to exaggerate their crime fighting statistics.

        Are you even awake?

      • The consensus of AI experts disagree with you. That many experts can't be wrong. They would have no incentive to exaggerate things.

        So what you're saying is "the science is settled"? Now where have I heard that before...

      • by gweihir ( 88907 )

        Was this consensus determined by AI? If so, it must obviously be true!

      • by jythie ( 914043 )
        'consensus' is not in the vocabulary of AI experts. It is a field with a lot of infighting and disagreement, even within the GOFAI and new-fangled camps
      • That many experts can't be wrong ??!!

        Do you know what the experts were saying about who would win the 2016 election?
    • Always nice to see that there are still people around who don't engage in Magical Thinking.
      • by gweihir ( 88907 )

        Well, "He is the Messiah! I should know I have followed a few!"....

        Those that fall for the current hype just fall for the next one when the current thing fizzles. AGI is at best a very long-term research goal (as in >50 years and no assurances), at worst impossible. Currently, we do not even know enough to have a scientific basis for that assessment.

    • I tried to simulate actual physical neurons... don't say "You don't need all of it.". Because that is why you're not there yet, and never will

      Citation needed; there's no convincing evidence that we require full fidelity. There's a much more obvious reason why we're not there yet, and that is pure scale. The human brain has 10^11 neurons and 10^15 synapses. Our largest "AI" networks have only 10^8 neurons, so of course they're not even close. We may also need more accurate simulations of each neuron, as you claim, but that has not yet been established.

      OTOH we can produce a wide range of narrow "AI" systems with basic neuronal simulations at curre

      • by djinn6 ( 1868030 )

        OTOH we can produce a wide range of narrow "AI" systems with basic neuronal simulations at current scale that are surprisingly successful - better than humans - at very specific tasks.

        But there's no particular reason to think that narrow AI would be any better at the task than custom code written by a guy behind the keyboard. It's being used right now because the guy is expensive and slow, while the narrow AI is both "good enough" and a massive investor money magnet.

        Broader-discipline tasks (e.g. true level-5 driving) may be largely solvable by a combination of more narrow systems

        I agree, but I'll bet when the car turns right at a stop sign, there's hand-written code that says to turn on the signal light. If it was a neural network trained to do that, you'd expect it to fail every once in a while beca

  • by Tablizer ( 95088 ) on Thursday December 12, 2019 @01:22PM (#59512886) Journal

    Existing AI companies are not producing enough revenue to justify the investments being made, from a financial perspective. That is, if an investor looks around at their options in multiple industries, AI doesn't look good on a price-vs-results spreadsheet.

    In the past this usually led to an unpleasant bubble poppage. Is it different this time, and how can one be sure? AI has progressed in fits and starts in the past, such that expecting a smooth upward curve of progress doesn't look rational.

    I'm sure there will be incremental improvements in the shorter term, but that's not enough to carry the current investment attention level.

    • The end of the summary says which areas are receiving the funding; which do you feel is over-resourced? Obviously there will be winners and losers long-term, but that in itself doesn't mean there is a bubble.
      • by Tablizer ( 95088 )

        Some sub-categories may indeed avoid "the bubble", but in general it looks like AI is over-leveraged, based on past patterns.

        Which sub-category do you feel has the most promise (least bubbly)?

    • by cusco ( 717999 )

      Full disclosure: I work at Amazon and used to work at AWS.

      AWS has a number of options, some of them fairly inexpensive, available now that allow companies to try AI approaches to their workflow. My understanding is that they're rather popular, and I suspect that a number of startups that were trying to sell this service are going to fail when customers figure out how easy some of it is to apply.

      • Those startups may fail, but people finding out how easy it is to replicate what they do is not likely to be the actual reason they fail.

        There are lots of products my company uses that I can replicate for much cheaper. There are also lots of products that my company uses that overlap in their respective functionalities.

        Let me tell you that no matter where you go, there is a large amount of resistance for companies to build tools that work best for their environments vs them going out and buying somet

        • by Tablizer ( 95088 )

          Indeed! Most humans are ultimately social creatures first. Logic and rationality come second.

          It may be true that better AI tools exist that they could potentially use. But that applies to everything in their organization, not just AI. They will ignore good AI the same way they ignored good A, good B, good C, etc.

          Further, there will still need to be somebody equivalent to a systems analyst to help adopt and inspect the AI so that it fits the domain. Somebody in the tuning loop still has to know the domain. A

          • by jythie ( 914043 )
            Eh, be careful about downplaying the social element. One of the reasons you see so many pre-packaged solutions is the human element. Custom stuff tends to have a higher learning curve, less support, and less chance of finding people already familiar with it, so it comes with a significant human-oriented cost. External solutions, even when they do not fit quite as well, have things like support (community or paid), documentation, polish, and you can find hires that already know at least something about t
    • Existing AI companies are not producing enough revenue to justify the investments being made from a financial perspective.

      What's more important for investment than price vs results is anticipated future results. This is the whole point of venture capital, and is what drives much of the share market.

      You don't need a smooth upward curve, and in fact a lot of investment expects to fail - just so long as they can also find that one spike to capitalise on that pays for it all. Reliable results are great for pension funds, but there's no shortage of money looking for longer shots. Yes you'll get the occasional bubble, but there's pl

      • by Tablizer ( 95088 )

        Most investors want relatively quick returns. Yes, there is a portion of longer-term investors, but that's probably not most of the source of tech investment.

        Plus, a lot of leading edge companies fail. Betting on bunches of leading edge companies alone has not done that well historically. Almost all the big dot-coms of 2000 are gone, for example. (Google and Amazon were not big then.)

  • by Frobnicator ( 565869 ) on Thursday December 12, 2019 @01:26PM (#59512918) Journal

    Collectively we've been redefining "intellegent" and "artificial intellegence" for decades.

    If you go with the Turing test --- the original one from 1950 --- the game where someone decides if the thing answering questions is a human or a computer, that's already done. Plenty of AIs can handle it amazingly well.

    And if you define it to mean able to replace or augment humans, you'll notice we've replaced humans at a tremendous number of tasks. Modern mathematical proofs are run through compute engines that even world-renowned mathematicians struggle to understand. Learning algorithms inspect images for contraband, they convert noise into an analysis of bridge integrity, they augment physicians and scientists and other skilled jobs. They can drive cars better than humans do. They can play games like Jeopardy and Chess and Go better than humans.

    Articles like this are merely kicking the can down the road with yet another definition that isn't met.

    But as Turing wrote in his 1950 paper, "thinking" is difficult to define. Just like Turing, we replace "thinking" and substitute it with different tasks. Instead of substituting it with the imitation game, we're substituting it with chess-playing skills, with driving skills, with identification skills, and more. Yet we're no closer to answering the initial existential question: "Can machines think?"

    • Collectively we've been redefining "intellegent" and "artificial intellegence" for decades.

      I define it as a computer that can perform spell check on a set of words. Still some work to do.

      • Collectively we've been redefining "intellegent" and "artificial intellegence" for decades.

        I define it as a computer that can perform spell check on a set of words. Still some work to do.

        It has gotten worse on smartphones lately: not only does my phone want to use common dictionary words in place of scientific or computing terms, it has started wanting to replace common dictionary words with more common dictionary words. I have to reread everything I type on my phone, not because I'm making mistakes, but because the autocorrect is making more mistakes than I am.

        • Why don't you switch off autocorrections?

          • Why don't you switch off autocorrections?

            Because I suck at typing on a touchscreen, and autocorrections were the only way I could get a legible e-mail out of my phone. Now I'm getting better and they are getting worse, so switching it off may now be the best choice.

            • You only need spell checking. Then you 'click' on the misspelled word and pick from the spell checker's suggestions.

        • by gweihir ( 88907 )

          MS Word is also pretty much a fail. I had to remove several insults from the last report I wrote, just because it "corrected" the name of the customer in really bad ways.

    • They can drive cars better than humans do.

      No.

    • Collectively we've been redefining "intellegent" and "artificial intellegence" for decades.

      If you go with the Turing test --- the original one from 1950 --- the game where someone decides if the thing answering questions is a human or a computer, that's already done. Plenty of AI's can handle it amazingly well.

      And if you define it to mean able to replace or augment humans, you'll notice we've replaced humans at a tremendous number of tasks. Modern mathematical proofs are run through compute engines that even world-renowned mathematicians struggle to understand. Learning algorithms inspect images for contraband, they convert noise into an analysis of bridge integrity, they augment physicians and scientists and other skilled jobs. They can drive cars better than humans do. They can play games like Jeopardy and Chess and Go better than humans.

      Articles like this are merely kicking the can down the road with yet another definition that isn't met.

      But as Turing wrote in his 1950 paper, "thinking" is difficult to define. Just like Turing, we replace "thinking" and substitute it with different tasks. Instead of substituting it with the imitation game, we're substituting it with chess-playing skills, with driving skills, with identification skills, and more. Yet we're no closer to answering the initial existential question: "Can machines think?"

      Yes, and "computers do for-loops better than any human".

      Everything you mentioned above has no "thinking" involved whatsoever. As is typical these days, when confronted with failure, you just re-define the terms.

    • You are clearly from the future and your time machine is way cooler than even ASI.
    • by gweihir ( 88907 )

      If you go with the Turing test --- the original one from 1950 --- the game where someone decides if the thing answering questions is a human or a computer, that's already done. Plenty of AI's can handle it amazingly well.

      Actually, they do not. They can simulate rather dumb people with a communication impairment, but that is it. No software pretending to be a smart, educated human being has ever passed the Turing test against a smart and educated examiner.

      Modern mathematical proofs are run through compute engines that even world-renowned mathematicians struggle to understand.

      No. Or at least only in very rare cases. What is actually done is that smart people break the proof they found into tiny, simplistic bits and explain them to the machine. The machine can then determine the validity of the proof. But finding proofs? Completely beyond computers

    • If you go with the Turing test --- the original one from 1950 --- the game where someone decides if the thing answering questions is a human or a computer, that's already done. Plenty of AI's can handle it amazingly well.

      Bullshit. That's not at all what Turing described.

      Turing's paper referred to the "Imitation Game", a popular parlour game at the time, as a model of something that might be used as a preliminary test of competence in a natural-language system. In the IG, there are two people - a "genuine" and an "imposter" - who converse for a period of time. For example, the genuine might be a woman, while the imposter is a man pretending to be a woman.

      The genuine knows that their role in the game is to unmask the impost

    • "Plenty of AI's can handle it [Turing test] amazingly well."

      That is amazingly false.

      A couple of years ago, AI challenges were lamenting that entrants were nowhere near that, to the point where they needed to find a new winning threshold.

      All the claims of wins have been cases where the computer said it was a six-year-old, autistic, second-language English student. They also tend to put 30-second deadlines on it. Turing explicitly disallowed that from the test.
  • Face it, Magical Thinking people: We don't even have a clue how 'thinking', 'consciousness', 'self-awareness', or any of the other key qualities of a human brain actually work; we can't even really define all the qualities necessary for human-level intelligence! So you can forget about all this meme 'deep learning algorithm' nonsense -- and make no mistake, it's a meme at this point, one started and perpetuated by marketing departments, directed by their CEOs, because they don't want to go bankrupt from all
    • Be careful, you are running afoul of the group-think-stink here at Slashdot.

      Facts take a back seat to politics and fantasy here. They seem to be able to comprehend the idea that "if a politician's position depends on them not understanding something, then they will never understand it" but not comprehend that the same truth exists in all professions and businesses. AI has been so over-"Trumped" that we now call simple if statements AI these days. The BS is rich and thick!

      We are now at "The Emperors New Cl

      • I have my own theory about all that, actually.
        Take a look back at books, movies, and TV for the last 50 years or more. It's full of fantasy images of autonomous robots, and miraculous artificial intelligence. Robby the Robot from Forbidden Planet. The Robot from Lost in Space (even as bad as that show was, in retrospect). Various computers and synthetic entities from the original Star Trek (and later incarnations of Star Trek for that matter). Isaac Asimov and his I, Robot collection of short stories (and
    • by gweihir ( 88907 )

      Face it, Magical Thinking people: We don't even have a clue how 'thinking', 'consciousness', 'self-awareness', or any of the other key qualities of a human brain actually work; we can't even really define all the qualities necessary for human-level intelligence!

      And _that_ is the state of the art in Science. We do not know, all tried approaches to produce even a tiny bit of AGI have failed and there is not even a credible theory how it could be done. At the same time, currently known physics has no mechanism for consciousness and probably none for intelligence on the level of a somewhat smart human being. Known physics is also known to be wrong, or rather it is known that something pretty fundamental is missing or different from the current model. This is a state u

    • by HiThere ( 15173 )

      I disagree. We may not have the correct idea, but several different people have different ideas as to how "thinking", "consciousness", and "self-awareness" work.

      I, e.g., consider all those problems quite simple, and the religious aura that people put around them seems foolish to me.
      E.g. consciousness and self-awareness are tightly connected. Depending on precisely which empirical definition you use, they are either the same, or one is a higher octave of the other. And at the basic level, any homeostat

      • Sorry, I can't read "a thermostat connected to an air conditioner and a heater is self-aware" without laughing out loud.
        • by HiThere ( 15173 )

          And a bit is a unit of information. You don't expect minimal examples to be very capable.

          If pressed I'd admit that it would probably be possible to design a simpler example of self-awareness, but it would be difficult.

  • by taustin ( 171655 ) on Thursday December 12, 2019 @02:13PM (#59513148) Homepage Journal

    But Facebook's "People you may know" are not only complete strangers 99.999% of the time, they're usually strangers who live in other states (or countries), with zero common interests (and about one in three is some sort of muscle-head bodybuilder) and zero common "Friends". And YouTube's recommended videos are about 90% stuff from the same channels as something I watched less than a minute of once, 5% stuff I've already watched all the way through, and 4% stuff that I've told it repeatedly I'm not interested in.

    Yeah, AI research is booming, but AI isn't advancing at all.

    • And what exactly has that to do with AI?

      One of your FB 'friends' uses the app and granted it access to contacts and phone numbers. The suggested guy has or had one of those numbers. Because he is a contact of one of your FB friends, a stupid programmer, more likely a product manager, decided guys like this are good friend suggestions.

      Or worse: the guy knows you for some reason and has your number in his contacts ...

      • by taustin ( 171655 )

        These are people that I do not know, that no one I know knows, and that I have no connection with. This is a list generated by an AI. Or generated completely at random, which is the same thing.

          Your FB 'friends' might not 'know' them on FB, but they still might have them in their phone's contacts, or someone who is their contact has them as a contact.

          Anyway, regardless how they do it, there is most certainly no AI involved.

          • by HiThere ( 15173 )

            Well, there is AI involved, at least in the 1970 sense of the word. But I think there's less AI involved in that than in Samuel's checkers program. That's a guess, of course, because I don't know how Facebook's recommendation system works.

  • WTF? How do you recoup $4.7 billion via profits by recognising people's faces? Advertising related?

  • Because the US is the greatest!

    Not really. It's because we still have some good universities and we attract talent from around the world. It's because we have an entrenched tech industry that can pay for advanced research. It's because 'our' scientists, many of whom were born in other countries, invented stuff here instead of their own country.

    Higher education, elite immigration and an attractive tech industry--keys to US success in one small area of endeavor. Now let's work on regular education, a fair amou

  • Love to exploit all of the data they can get their grubby hands on...

    With the other topics in the news of no encryption for citizens being promoted and the blatant third-world disregard for privacy, it's obvious that without data, that b!tch is worthless... I mean... The consumer/retail AI sector will have to resort to fabricated data, just like pre-AI times (think pay-to-play medical and scientific studies or marketing in general), if they're denied access to all of that free, pilfered data.

    Without data, c
