Are We on the Cusp of an 'AI Winter'? (bbc.com)

The last decade was a big one for artificial intelligence, but researchers in the field believe that the industry is about to enter a new phase. From a report: Hype surrounding AI has peaked and troughed over the years as the abilities of the technology get overestimated and then re-evaluated. The peaks are known as AI summers, and the troughs AI winters. The 2010s were arguably the hottest AI summer on record, with tech giants repeatedly touting AI's abilities. AI pioneer Yoshua Bengio, sometimes called one of the "godfathers of AI", told the BBC that AI's abilities were somewhat overhyped in the 2010s by certain companies with an interest in doing so. There are signs, however, that the hype might be about to start cooling off.

"I have the sense that AI is transitioning to a new phase," said Katja Hofmann, a principal researcher at Microsoft Research in Cambridge. Given the billions being invested in AI and the fact that there are likely to be more breakthroughs ahead, some researchers believe it would be wrong to call this new phase an AI winter. Robot Wars judge Noel Sharkey, who is also a professor of AI and robotics at Sheffield University, told the BBC that he likes the term "AI autumn" -- and several others agree.

  • by ganv ( 881057 ) on Monday January 13, 2020 @02:59PM (#59616598)
    ... isn't "winter". It is just a slow realization that very hard things like general intelligence didn't become easy just because of a new twist on using neural networks. Maybe at some point the hype swings will occur at such short time scales that we'll stop noticing, and we can get back to the interesting questions about what tasks currently done using human intelligence will be automated in the near future.
    • That and some investors are seriously questioning their (lack of) return on investment.
      • by Tablizer ( 95088 )

        That and some investors are seriously questioning their (lack of) return on investment.

        I don't know of any "Warren Buffett" of technology. While some investors got rich off of a single company, I don't know of any investor who consistently does so across multiple tech companies.

        Let's set a threshold of having invested in at least 10 tech companies, and who beat the general market with total tech company investment by at least 3%, and have had at least 50% of those companies beat the market. This is so one can't

    • by Anonymous Coward

      The problem I have with this premise and news story is that the only people making such suggestions seemed to be the media themselves; of the people actually working on AI research, I don't think anyone was claiming that we're "nearly there" or anything similar.

      So when we talk about an AI winter, what we're really saying is that the media cycle is bored of this particular subject for now and so is moving on.

      In terms of actual AI research itself, nothing has changed. The pace will continue as it always ha

      • by ganv ( 881057 )

        The media and the hype seekers who the media cater to are the core problem. But I fear the public gets the media they deserve. People want sensational stories where everything changes today because of this new breakthrough. Someone is willing to give it to them, and that becomes the media. Maybe slow media will join slow food as a trend among thinking people.

        And I agree that the actual AI research is progressing as usual...slowly building systems that change everything. But it is hard to report on t

    • AI had a boom in the mid 80s followed by a hard crash. I don't think we're going to have a hard crash this time; there is too much real world adoption of AI technologies for that to happen. "AI Autumn" sounds about right for what is coming.
  • by Kiliani ( 816330 ) on Monday January 13, 2020 @03:10PM (#59616648)

    An artificial winter or fall in the age of global warming. What could be better?? Sorry ...

    Still: less hype may mean better results and more relevant use scenarios long-term.

  • I hope not (Score:5, Funny)

    by nospam007 ( 722110 ) * on Monday January 13, 2020 @03:18PM (#59616674)

    Siri, do my workout for me.

  • I always thought Marvin Minsky was considered the grandfather of A.I.

    Or did he disappear in so tiny a puff of greasy smoke that he has been written out of history completely now?

    Regardless, whoever is standing at the grill flipping the nothingburgers, could you pass the mustard?

  • ... given MS's track record. Also, someone gave me a very apt description of AI software: those who do not understand [their own algorithms in software] call it AI. Yes, there's a lot of AI research going on, but so far it merely seems to yield idiot savants.

  • by TomGreenhaw ( 929233 ) on Monday January 13, 2020 @03:26PM (#59616696)
    Machine learning is much better. It implies that machines self-program with training examples.

    Artificial Intelligence implies vastly more with such great variety that it is nebulous and essentially meaningless.
    • by ceoyoyo ( 59147 )

      Not all AI involves learning. The original AI, which is still alive and well today, involves taking a bunch of facts, sticking them in a database, and programming methods to combine them.

      AI just means getting a computer to do something more like a person (or animal) would do it. It's not even necessarily better. AI arithmetic systems suck, just like people.

      • Comment removed based on user account deletion
        • You mean... BingoNot!

          Matching patterns is not intelligence. The very atoms themselves coalesce into patterns, match on them, and even change interactions based on them; are you going to say they are intelligent now too? Intelligence has a meaning; we literally have a dictionary that can tell you what intelligence means. If you are not going to follow that and instead make it up as you go along, then you are going to fail. Just like all of these other "failures" in AI!

          If you cannot even use the right words

          • First... agree on the definition of things.

            That will never happen.
            "AI" is just a symbol that used to mean "Artificial Intelligence",
            just as "KFC" used to mean "Kentucky Fried Chicken".

            Nowadays, "AI" just means "solving problems without really knowing how",
            which is just as risky as it sounds.

        • by ceoyoyo ( 59147 ) on Monday January 13, 2020 @04:57PM (#59617132)

          People think their brains are magic. General or hard AI is an assault on the last bastion of our species' ego.

          Regular AI doesn't have much to do with that. The concept dates back to ancient Greek philosophers trying to come up with rules for thinking (logic) and people telling stories about making things like golems. The term itself was coined at a conference in 1956. Minsky was an attendee. Minsky himself was really more of a symbols guy. In fact, he was famously critical of the connectionist approach.

          Back in the 90s I had a professor who was a symbols AI guy. He made us all learn Prolog, which, in retrospect, was really valuable. The declarative logic style of Prolog is a very different paradigm than most popular languages today. After wrapping your brain around that, things like parallel programming and efficient programming in interpreted languages don't seem so hard.

          At the same time as I was taking Prolog and hearing about how wonderful symbolic AI was, I was doing a couple of self study courses building sonar-wielding robotic cockroaches controlled by neural networks.

      • The original AI, which is still alive and well today, involves taking a bunch of facts...

        The "original AI" is "Artificial Intelligence" and it doesn't exist yet. And yes, being a "marketing person" should be a capital offense.

        • by ceoyoyo ( 59147 )

          The first part of your sentence is correct; the second part is wrong. You don't just get to define terms however you like. The term "artificial intelligence" was coined 65 years ago. And no, it's not capitalized unless it's the name of a course or something.

    • by Tablizer ( 95088 )

      Too bad, the term is entrenched. Maybe we can come up with "internal" category systems for specialists to use, but I suspect there will be too many fuzzy boundaries and dispute points.

      machines self-program with training examples.

      Even humans don't entirely do that. We rely on parents, books, teachers, etc. And, relying on an assistant of some kind is not necessarily a show-stopper. There may be multiple roads to "smart". Ideally a bot would be "self learning", but even without such, something very usefu

    • by Pseudonym ( 62607 ) on Monday January 13, 2020 @09:54PM (#59618044)

      Machine learning is much better.

      Oh, that's so 15 years ago, though! Depending on what decade you're in, here is the term that you should prefer:

      1950s: Artificial intelligence
      1960s: Machine intelligence
      1970s: Reasoning systems/Perceptrons (depending on which side you were on)
      1980s: Expert systems
      1990s: Intelligent agents
      2000s: Machine learning
      2010s: Deep learning

    • It's become nebulous and essentially meaningless because every time someone trotted out a new algorithm based on subsets of data, they called it A.I. and the press sucked it up. And the reason these people keep calling their shitty algorithms A.I. is that they need the exposure to get funding for their shitty algorithm, and getting it splashed across the newspaper sure does help a lot.
  • by Waitfor1tt ( 6469518 ) on Monday January 13, 2020 @03:26PM (#59616698)
    Imagine you make your own moonshine. Then, all of a sudden, everyone discovers moonshine and over-indulges. Now people realize: moonshine is just another debaucherous alcoholic beverage, and you just have a hangover. That's all it is... too much hype over a non-development, followed by a hangover. In the end, anything constrained by 'Moore's law' can bathe in the light of hype when its time is due.
    • If we sense something is over-hyped, I wish there were a consistent way to make money off it. Stock "puts" are one suggestion, but I don't know of any individual who got rich from such beyond one-off luck.

      Maybe the $'s are in e-books such as "Learn [Fad X] in One Week Unleashed Bible in a Nutshell Face First for Drooling Morons".

  • by Tablizer ( 95088 ) on Monday January 13, 2020 @03:26PM (#59616704) Journal

    Given the billions being invested in AI and the fact that there are likely to be more breakthroughs ahead [according to some]

    The implication that big breakthroughs will happen soon just because so much is invested is questionable. Often it takes multiple technologies to mature to get to the next level.

    For example, semiconductors had been known since the 1800s, but manufacturing technology was not mature enough to produce consistent and cheap components. And much of our current AI comes from the fact that computers got powerful enough to test and train complex neural networks. The theory behind them is mostly old, and some of it comes from the biology of the eye.

    And as long as we have to rely on chemical rockets to get into space, space-travel probably won't become mainstream, despite 60 years of space travel. Some kind of energy breakthrough will be needed. Throwing more money at chemical R&D will probably only create incremental improvements.

    Robot Wars judge Noel Sharkey, who is also a professor of AI and robotics at Sheffield University, told the BBC that he likes the term "AI autumn" -- and several others agree.

    I don't understand why an AI expert is a judge for the human-remote-controlled demolition derby known as "Robot Wars". Don't get me wrong, I love the metallic mayhem of Robot Wars, BattleBots, and King of Bots (China), but he seems like the wrong person for the job.

    I believe Robot Wars is currently off the air, a victim of the Brexit slump by some accounts.

    • And as long as we have to rely on chemical rockets to get into space, space-travel probably won't become mainstream, despite 60 years of space travel. Some kind of energy breakthrough will be needed. Throwing more money at chemical R&D will probably only create incremental improvements.

      Bad example, as it involves fundamental limitations of our universe. Human brains run on ~20 watts and have relatively simple functional building blocks.
      The main difference between organic brains and ANNs is topology. Remember that before the deep learning advances, ANNs were really, really flat. The topology was absolutely trivial.

      Deep learning topologies are more intricate than the ANNs before them, but are still very far away from the complexity of organic brains. In addition to that, the topologies are

      • by Tablizer ( 95088 )

        Bad example, as [space travel] involves fundamental limitations of our universe

        No it doesn't. Nuclear power could get a ship going at, say, 25% of the speed of light. We just don't know how to manage and tame it cheaply. There are also light sails, in-orbit rail guns, and laser propulsion, all within the laws of physics, but so far also difficult to work with. New and/or automated manufacturing techniques may change this.

        As far as your statements implying it's relatively easy to clone/emulate the more advanced asp

        • No it doesn't. Nuclear power could get a ship going at, say, 25% of the speed of light. We just don't know how to manage and tame it cheaply. There are also light sails, in-orbit rail guns, and laser propulsion, all within the laws of physics, but so far also difficult to work with. New and/or automated manufacturing techniques may change this.

          None of these solutions get us beyond the escape velocity of Earth. Chemical rockets do. You know, the things we currently "rely on [...] to get into space". Your '[space travel]' substitution was misleading. It should have been:
          "Bad example, as [getting into space] involves fundamental limitations of our universe"
          My point still stands.

          As far as your statements implying it's relatively easy to clone/emulate the more advanced aspects of the architecture of biological brains, I shall await the pudding actually coming out of the oven to make a judgement on that.

          I did not imply that at all. I was trying to shed light on an area of ANNs where massive improvements can be made. Improvements that mainly require insight and cleverness, no

          • by Tablizer ( 95088 )

            None of these solutions get us beyond the escape velocity of Earth.

            Who said one propulsion technology has to do it all? And maybe someday we can have bots build ships on the moon or on/with asteroids.

            • The statement was: Bad example, as [getting into space] involves fundamental limitations of our universe [as opposed to ANN evolution].

              Please don't change the subject.

              • by Tablizer ( 95088 )

                It doesn't. You are wrong. There are plenty of tappable sources of energy in the universe. The bottleneck is mining, processing, and manufacturing, not fundamental laws of physics. Interstellar ships may take multiple generations between trips, but that's why babies were invented.

                Another technique I didn't mention is "space buses" with elliptical or figure-8 orbits distributed about our solar system. Passengers could transfer between orbits of these as needed to get to various destinations. Thus, mo

                • So, I'm sitting in my house, on planet Earth. I want to get into space.
                  Tell me which of the things you've mentioned will get me there.

                  • by Tablizer ( 95088 )

                    So, I'm sitting in my house, on planet Earth. I want to get into space. Tell me which of the things you've mentioned will get me there.

                    All of them have the potential. Your mom's basement may someday orbit Uranus.

  • The thing about AI is that it has exactly zero demand from consumers.
    I don't remember anyone looking to buy some gadget "because of the AI it features".
    At present AI is pushed by the likes of Google, Facebook, Microsoft mainly to analyze the large amounts of data they have collected through their spyware. Users are not asking for it.
    It is also pushed by those who want to get rid of as many jobs as possible (self-driving cars, no drivers to pay).
    And since AI is most of the time linked with spyware or more jo

    • by Anonymous Coward
      Not true. I love the way Google Photos automatically organizes my photos by person, and also lets me search for objects in the photo, e.g. pizza.
      • by spudnic ( 32107 )

        But that's not AI. That's a really clever algorithm. Sure, it has been fed millions of data points to base its results on, but there is no actual intelligence involved.

        • by Tablizer ( 95088 )

          How do you know? Some say AI is just "statistics with clever shortcuts".

        • And yet once something has been reduced to practice, it becomes "an algorithm."

          The phone company once predicted that by 1960 everyone would need to be a telephone operator. And by 1960 we all were: we instructed the switching system how to route the call, by dialing or punching in a telephone number. We used to have to go to AAA and get maps where some person with training highlighted a driving route (say between Boston and Milwaukee) with indications as to where there was road construction, routing around

        • by SoftwareArtist ( 1472499 ) on Monday January 13, 2020 @09:09PM (#59617950)

          "Artificial intelligence" is a technical term that's had an accepted definition for 60 years. But some people on slashdot have decided to ignore that definition and invent their own. This comes up so often, I keep the following in a document on my desktop ready to paste in when needed.

          The term "artificial intelligence" was coined in 1955 [stanford.edu] by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon (all major figures in the development of computer science). Here's how they defined it:

          For the present purpose the artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving.

          This is how the field has consistently defined it for over 60 years. Notice all the things that definition doesn't require. The machine doesn't need to be self-aware. It doesn't need to understand what it's doing. It doesn't need to do the task in the same way a human would. It doesn't even matter if we have no clue how a human does it. All that matters is that it successfully does something "that would be called intelligent if a human were so behaving." That is what "artificial intelligence" means.

          When a human plays chess, everyone agrees they're using intelligence. Or when they translate a document from French to English. Or when they drive home from work without hitting anything, getting lost, or breaking any traffic laws. Therefore, when a computer does these things, that is "artificial intelligence". By definition. By the definition that has been widely accepted for well over half a century.

    • by Kjella ( 173770 )

      The thing about AI is that it has exactly zero demand from consumers. I don't remember anyone looking to buy some gadget "because of the AI it features".

      Consumers bought 92 million [voicebot.ai] smart speakers last year. The thing is /. likes to ask extremely loaded questions; it's like "You know those GPS bracelets they put on criminals, right? Now they're trying to make everyone voluntarily carry a radio beacon in their pocket, crazy am I right?" while normal people go "Having a cell phone in my pocket sounds pretty damn useful, where can I buy one?". Besides, it's not as if B2B demand isn't real, maybe what you want is a self driving car or a pizza delivery bot but it's

  • Lower Standards... (Score:5, Insightful)

    by SirAstral ( 1349985 ) on Monday January 13, 2020 @03:57PM (#59616834)

    The standards for what counts as AI are just too low now.

    We are calling advanced but dumb non-learning algorithms AI these days. We are a significant ways away from AI right now. We will not see it in our lifetimes based on what I am seeing. Every creature alive with AI can rewire itself... I do mean this literally. Neural pathways change, dendrites remap based on needs, they fail with disease and damage; our motor skills are driven by this remapping, and it is why practice is important. Computers do not do this... no, software re-coding and learning is not even close to the same thing. We functionally change our compilers on input/processing/output while computers cannot do this for themselves. Until we begin work in this area, we are going to be very limited in what AI can achieve from a holistic perspective and be limited to using AI for only the simplest of reductive reasoning functions... which are still by no means a worthless pursuit, but let's stop calling that AI, because it is just not AI.

    If it finds the answer by random testing... it's not AI. It has to arrive at the answer through a learning method. We humans do not get to pass math class by giving random numbers to the teacher until we guess the right one; we should not have this benchmark for machines either!

    • We are calling advanced but dumb non-learning algorithms AI these days.

      These days?!? Clearly you have no understanding of the AI field as a whole.

      There are ENTIRE subfields of AI that have *nothing* to do with learning algorithms.

      The issue isn't that we are "suddenly" calling non-learning algorithms AI - those have been part of AI since its very inception. It's the addition of machine learning to the AI umbrella which is new, not the other way around.

      The issue here is idiots who don't know anything about the history of AI, and therefore erroneously assume that AI is ONLY "mac

      • "There are ENTIRE subfields of AI that have *nothing* to do with learning algorithms."

        that is the crux of the problem.

        If (you believe anything) { "Everything is AI" }

        There is a critical point at which something is or is not AI. If the definition is too broad, then even the most basic computer functions are Artificial Intelligence. The bar is just too low, as you have just shown. But hey... you are not the first person to come along and change the definition of words to suit their ignorance.

        Intelligence
        https:/ [merriam-webster.com]

        • Here is how the term "artificial intelligence" was defined by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon when they coined the term [stanford.edu] in 1955.

          For the present purpose the artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving.

          That has been the accepted definition of the term for 65 years. That is how everyone in the field has been using it since, I suspect, before you were even born. If you're using it to mean anything different from that, then you're using it incorrectly.

        • ...you are not the first person to come along and change the definition of words to suit their ignorance.

          This is what you are doing though. I am using the accepted definition as defined by the actual field, you apparently dislike GOFAI, so you are attempting to re-define a term that you don't even understand to begin with.

          ...then you need to define AI using a definition that exists...

          *I* am - YOU are not.

          no one you just pulled out of your ass or the industries ass!

          *I* am not - YOU are.

          P.S. you can cure your ignorance with a book or two...

      Traditional AI has a deep tradition, and there has been a lot of ambitious and fairly impressive work. But generally it hasn't been useful. Machine learning has a few key tricks; it's not especially deep or impressive, but for many important applications it's useful as heck (aka important).

        Both sides wish they could claim the ultimate crown of being both impressive and important, but neither side can. So they join into a somewhat unholy alliance to create a hybrid, one side impressive, the other side imp

    • We are calling advanced but dumb non-learning algorithms AI these days. We are a significant ways away from AI right now. We will not see it in our lifetimes based on what I am seeing. Every creature alive with AI can rewire itself... I do mean this literally.

      I think what you mean is "creatures with non-human intelligence", since these creatures possess a "natural" although not human kind of intelligence.

      If it finds the answer by random testing... it's not AI. It has to arrive at the answer through a learning method. We humans do not get to pass math class by giving random numbers to the teacher until we guess the right one; we should not have this benchmark for machines either!

      The problem is that you appear to treat "AI" as a sort of holy word like "God". Many human activities considered "intelligent" don't involve learning (something new). For example, if I asked you what's "2+2" and you answered "4", would you have actually "learned" anything? No, unless I posed a trick question and said "Wrong, it's 10 since I'm talking base 4", t

  • by doom ( 14564 ) <doom@kzsu.stanford.edu> on Monday January 13, 2020 @04:12PM (#59616910) Homepage Journal
    If their AI was worth anything, wouldn't it be able to predict its own winters? Why are we still paying attention to these "experts"?
    • Your implied definition of AI is artificial general intelligence. Few "experts" are predicting this will be available soon. There's also the narrower task-orientation definition that covers chess and Go-playing systems and robots that, in their ability to avoid or climb over obstacles, are functionally already as "smart" as the cockroach I sprayed with alcohol this morning, a natural intelligence that hadn't learned that messing around with somebody's sandwich isn't a very intelligent thing to do even if i
  • by mugnyte ( 203225 ) on Monday January 13, 2020 @04:36PM (#59617026) Journal

    Yeah yeah, we know the term "AI" isn't meaningful, except in the general public's perception of a semi-autonomous machine in human-like form with endless potential for questionable agency to power a film plot.

    The more important advances in form-fitting a larger number of inputs to an optimal solution path via Machine Learning will continue to find their uses. But "General AI", where a machine quickly surmises its place, role, and goals in the real world, especially by keeping a model of it across several senses in highly parallel processing, is leagues away. I doubt the physical architecture for such a thing has been invented yet.

    But we're headed in that direction. Someone will probably build several generations of ML that try to solve candidate architectures for General AI first, and chew away at the problem(s). Once we see parallelism scaled way up, and innovative cooling solutions for highly-layered semiconductors, we may be getting closer. This is speculative and assumptive on my part, obviously.

  • The problem with AI is that the data scientists haven't taught it to use Agile methodology.
  • Superconductors? Stem cells? VR?

  • We've had a huge leap in development in areas associated with AI - image and speech recognition, pattern recognition, data analysis, decision systems.

    What we haven't seen one bit of is actual intelligence. There isn't a hint of self-awareness or a glimmer of systems evolving themselves beyond the design parameters. None of the machine learning and deep learning systems are setting their own goals or showing lateral thinking.

    We have, in short, found a set of very efficient algorithms that solve long-standing

  • The distinction between "hard" and "soft" AI? Let me know when someone develops hard AI, until then, I really don't care.

  • But between the year 1980 and the year 2017 they got all the way from Chess to Go! And they're going to stop now just as they hit that breakneck pace??
  • The only real innovation we've seen in AI since that last AI winter decades ago is cloud computing. ML is just 80s/90s neural nets distributed on someone else's machinery.

    Regardless, ML will become the new buzz for programmers making absolutely anything, much like OOP back when those AI ideas were fresh

    • And resnets, LSTMs, GANs, attention, modern optimization algorithms like Adam, modern RL algorithms like PPO... None of these count as "real innovations"? More compute power is great, but the improvements on the software side have been at least as important. Anyway, if you want to focus on hardware, GPUs have been way more important than cloud computing.

      • Most of those are rooted firmly in the ancient computing past and no, I don't think most of those are "real innovations," but incremental polishing of the same turd.

        GPUs are cool for bitcoin mining but all the interest and money seems to be in AI in the cloud. Certainly that's what the recruiters talk about. And if that interest weren't there, all the funding would dry up. And that makes winter.

        Where is the explainable AI? What ever happened to Cyc and why is it still there if we don't all know? Bette
  • Is it overhyped? Yes. But it _actually works_ this time. It's sort of like the previous .com boom/bust: last time there was an implosion because not only were .coms overhyped, but e-commerce wasn't viable yet. Nowadays all other commerce is gradually becoming irrelevant. Same with AI: before the last AI winter _not a single goddamn thing_ actually worked. It was 100% straight hype and nothing else. Nowadays you have systems that actually work, and some that even deliver superhuman performance. Moreover, the

    • Re: (Score:3, Insightful)

      by f00zbll ( 526151 )
      I don't believe it is accurate to say "not a single goddamn thing actually worked". That is factually untrue. OCR is one of the products of AI research and it worked fine for many businesses. Inference rule engines are widely used for things like compliance engines for securities. Map directions came out of AI research.

      The problem is some jackasses over-hyped AI and made promises that were totally stupid. Just like there are people hyping AI to silly levels and saying stuff that doesn't make any sense.

      Acade

  • So basically my mockery was prescient.
  • Wintermute

  • "We're not rebuilding, we're reloading."
    — Edmonton Eskimos after five-in-a-row dynasty (several of those with Warren Moon)

    The Eskimos were back to the finals again in three years (granted, it's a small league).

    The thing with the younger Moon was he couldn't lay off the long bomb, so he spent some years perfecting his craft in a QB platoon with the nickel-and-dime specialist Tom Wilkinson. Wilky taught him two important words: patience, grasshopp

  • Why are we even entertaining the BBC on Slashdot? Their tech articles are total crap.
