Transportation Software Technology

Toyota Is Uneasy About the Handoff Between Automated Systems and Drivers (caranddriver.com) 135

schwit1 shares a report from Car and Driver: Toyota has not yet decided whether it will bring a car to market that is capable of automated driving in some situations yet still requires a human driver behind the wheel who can take control if needed -- but the automaker, characteristically, is more cautious than many about moving forward with the technology. Citing safety concerns regarding the handoff between self-driving technology and the human driver, Kiyotaka Ise, Toyota's chief safety technology officer, said the biggest issue with these kinds of systems is that "there is a limbo for several seconds between machine and human" in incidents when a car prompts a human to retake control if it cannot handle operations. These kinds of systems, defined as Level 3 autonomy by SAE, have divided automakers and tech companies in their approaches to developing cars for the self-driving future. As opposed to Level 2 systems, like Tesla Motors' Autopilot, in which a human driver is expected to keep his or her eyes and attention on the road while a system conducts most aspects of the driving, Level 3 is characterized by the system's claiming responsibility for the driving task when it is enabled. Although Toyota assures us that its researchers are hard at work figuring out the challenges of Level 3 autonomy, it seems like the company could eventually join others moving directly from its current Level 2 system to a Level 4 system. Given that the self-driving race has been on for a while, this could put Toyota at a competitive disadvantage, but it's clear engineers at the company care more about getting things right than they do about being first.
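For readers keeping score of the levels argued over in the comments below, here is a minimal sketch of the SAE J3016 automation levels as a Python enum; the one-line summaries are paraphrased for this discussion, not the standard's official wording.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels (summaries paraphrased)."""
    L0 = 0  # No automation: the human does everything.
    L1 = 1  # Driver assistance: steering OR speed assisted (e.g. plain adaptive cruise).
    L2 = 2  # Partial automation: steering AND speed assisted, but the human
            # must supervise at all times (e.g. Tesla Autopilot).
    L3 = 3  # Conditional automation: the system owns the driving task within
            # its domain but may ask the human to take over -- the handoff
            # Toyota is uneasy about.
    L4 = 4  # High automation: no human fallback needed within the design domain.
    L5 = 5  # Full automation: drives anywhere a human could; no wheel required.
```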
  • Toyota is... (Score:5, Insightful)

    by Anonymous Coward on Tuesday November 07, 2017 @08:04AM (#55505155)

    The summary states, "it's clear engineers at the company care more about getting things right than they do about being first."

    So, basically what you're saying is, Toyota is the anti-Tesla.

    • Re:Toyota is... (Score:4, Insightful)

      by AmiMoJo ( 196126 ) on Tuesday November 07, 2017 @08:14AM (#55505207) Homepage Journal

I'm not convinced that Tesla will get to level 5 with their current hardware. They are already selling level 5 to customers as a future firmware update (€3000 extra last time I looked), saying they will upgrade hardware if necessary for people who already paid.

      Their system is only cameras and ultrasonic sensors, no lidar. They are using neural nets for image processing.

      • by Anonymous Coward on Tuesday November 07, 2017 @08:28AM (#55505239)

Tesla is the industry leader in making promises.

      • by Ost99 ( 101831 )

        No lidar, but they have a forward facing radar.

8 cameras, 1 radar and 8 ultrasound sensors. Way more than we have.

        • by AmiMoJo ( 196126 )

          The whole sensor package seems poorly thought out. For example, the lack of a nose camera means that they can't implement a 360 degree overhead view like Nissan and several other manufacturers have now.

          And then there is the whole "AP2.5" debacle, where they did a major computing power upgrade over the original AP2.0 hardware after realizing it wasn't up to the task, and then promised free upgrades to people who had already paid for full self driving a year ago.

          The thing about cameras is that they are not th

          • The sensor package doesn't really matter until someone figures out how to make it work under a layer of ice. People are generally good at brushing off their cars but no one will spend time picking ice off the sensors. Maybe current sensors won't be affected by this, but I know cameras will and if there aren't enough working sensors in the most dangerous time of year it's a big problem.
        • by Rei ( 128717 )

          12 ultrasonic sensors on Model 3.

That said, what we have that a car doesn't is a brain that automatically fixes photogrammetric stitching errors based on the logic of "that doesn't make sense" - deciding whether something "makes sense" being an AI-hard problem. So one tries to compensate for this by giving vehicles better sensors than human beings have.

          Lidar provides a superb data stream when conditions are right, but it's too problematic. You're not going to put awkward, draggy, ugly domes on top of everyone's

      • "I'm not convinced that Tesla will get to level 5 with their current hardware."

        I think your analysis is perfectly valid. One thing though. Human drivers basically have two (at most) pretty good optical sensors with no useful range capability (human stereo vision only works out to about 6-7 meters). The sensors can scan right to left through about 180 degrees and up/down through 30-45 degrees. They are augmented by at most three very limited mirrors and maybe on modern cars by a flaky, small screen rear

    • Re:Toyota is... (Score:5, Insightful)

      by mjwx ( 966435 ) on Tuesday November 07, 2017 @08:40AM (#55505273)

      The summary states, "it's clear engineers at the company care more about getting things right than they do about being first."

      So, basically what you're saying is, Toyota is the anti-Tesla.

      Basically you're saying Toyota is being Toyota (conservative to the extreme, but good at what they do).

Toyota is not the only one concerned with this. As a road user, I'm concerned about what will happen when Dopey Doris's automated car struggles with faded lines on a single-lane road (quite common on my 18-mile commute to work). Right now, Dopey Doris can only spend half her attention on her phone; I hate to think what will happen when she puts her full attention into it and, because she's so engrossed in FaceCrush or the latest episode of Keeping up the Cardasians, completely tunes out the alarm throwing control back to the driver while the car veers into my lane uncontrolled.

Autonomous cars need to be 99.999999999999999% reliable before they should be considered ready for public consumption. Right now, they're nowhere near it. Google's success has been due to two factors: 1. it was all done in sunny California (I'd like to see the same car in Berkshire), and 2. the car has been in the hands of a professional driver the whole time. The current track record for autonomous cars alone is nil; the record is for car and driver working together. Of course, we know that in the real world, if you gave the Google autonomous car to Dopey Doris commuting from Finchamstead every day, she's going to assume that it will do everything for her. So we need to make sure it can operate without human intervention, because it needs to: human intervention can't be counted upon from the average steering wheel attendant with a phone shoved up their nose (we get enough collisions from these types as it is without giving them extra reassurance).

      • Autonomous cars need to be 99.999999999999999% reliable before they should be considered ready for public consumption.

        Since that's essentially an impossible standard, what you're saying is that we should never use autonomous land vehicles, even though human drivers fall massively short of that same safety threshold.
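For a sense of scale, a quick back-of-the-envelope in Python. Treating each vehicle-mile as an independent "event" and the US annual mileage figure are both illustrative assumptions, not measurements:

```python
# The grandparent's "99.999999999999999%" implies a failure rate of
# roughly 1e-17 per event.  (That literal rounds to exactly 1.0 in
# IEEE-754 doubles, so we state the failure rate directly.)
demanded_failure_rate = 1e-17

# Illustrative assumption: US drivers log about 3 trillion vehicle-miles per year.
us_vehicle_miles_per_year = 3e12

expected_failures = us_vehicle_miles_per_year * demanded_failure_rate
print(f"{expected_failures:.1e} expected failures per year")  # 3.0e-05
# i.e. one failure per ~30,000 years of the entire US fleet driving.
```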

        • I"d say at the least the onus is on them to guarantee that it will never create an accident in a situation that I, as an individual, would be able to deal with. And that should go for anyone buying an automated vehicle.
I live in Los Angeles and don't own a car, but I have to rent one to drive ~100-250 miles per day a few times per month. Last week I was twice stuck driving at odd hours and struggling to stay awake and focused on the freeway. While I "powered through" it, having just automatic lane keeping and car following operational would have made the trip much less stressful, and likely improved safety by reducing event risk from ~5% per 100 miles to ~0.01%, on par with a non-fatigued driver.

            So then you get into the que

          • I"d say at the least the onus is on them to guarantee that it will never create an accident in a situation that I, as an individual, would be able to deal with. And that should go for anyone buying an automated vehicle.

            What about all of the situations in which you would have an accident, but the automated system would avoid it? Suppose that there are a thousand of those for every one where you'd succeed and the system would fail. That would fail your requirement as stated. Do you really think that makes sense?

        • Currently, there are about 218 million drivers and in 2016 there were 6,296,000 police-reported motor vehicle traffic crashes. So that's about a 2.8881% crash rate. Just saying.
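The parent's arithmetic checks out; a one-line sanity check in Python:

```python
drivers = 218_000_000   # licensed US drivers, per the parent comment
crashes = 6_296_000     # police-reported crashes in 2016, per the parent comment
print(f"{crashes / drivers:.4%}")  # 2.8881% -- crashes per driver per year
```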
      • Re:Toyota is... (Score:5, Insightful)

        by CastrTroy ( 595695 ) on Tuesday November 07, 2017 @09:04AM (#55505371)

        This is also a big concern of mine. Cars should either be 100% autonomous, or 0% autonomous. I'm all for adaptive cruise control, but as soon as you introduce technology that allows people to take their attention away from the driving and have it still follow the road for a significant period of time, that's where you run into problems.

If you haven't had to actually touch the steering wheel for a month, how much would you really be paying attention? What happens when the car screws up and you need to take over? Are you going to be too engrossed in your other activities to take over? Also, what is the point of paying for all this technology if you don't actually get to stop paying attention? If you're going to have to keep your hands on the wheel, you might as well actually be driving, because otherwise it isn't really worth the expense.

        • I mostly agree. I could see there being room for a car that safely pulls over, or otherwise gives the driver plenty of time to "shift gears" and develop situational awareness before they have to take over in unusual circumstances - but it should *never* require the driver to take over on short notice, which demands that they be constantly maintaining situational awareness against the very low chance that they need to take over. Human attention just doesn't work that way.

          Highway driving is probably a good

        • by AmiMoJo ( 196126 )

          Level 3 is okay. That's where the car can drive itself under certain limited conditions (e.g. on a highway, but not on urban roads) and gives the driver plenty of warning when they need to take over. By "plenty" I mean 60+ seconds, and if you don't take over nothing terrible happens.

        • by antdude ( 79039 )

          But nothing is 100%. Not even 99.99...%. :(

          • By 100%, I mean no steering wheel or the system is good enough that you can sit in the back seat if you so desire. This concept of "good enough but you might have to unexpectedly take over once in a while" isn't really that great of an idea because people simply won't be paying attention if they aren't required to pay attention all the time.

      • by AvitarX ( 172628 )

        I actually think Toyota is wrong here.

I suspect that advanced level 2 (what we currently have) is the most dangerous. An aggressive alarm with a seconds-long handover seems far safer than a completely inattentive driver nominally touching the wheel.

        I think the danger zone starts as soon as you combine adaptive cruise control and lane assist, and that the only safe option is to leave the wheel completely in control of the human until level 3 (which I think is safer than current level 2).

      • Autonomous cars need to be 99.999999999999999% reliable before they should be considered ready for public consumption.

        No, they really do not need to be that reliable. Not even close to that reliable.

        People are not that reliable. People like the Dopey Doris you describe can't keep themselves alive because the phone is so much more interesting than driving is. I absolutely want Doris to have a self-driving car, because even current technology is likely better at driving than she is.

        Even a shitty self-driving car will likely be better than the bottom 25% of human drivers. Unless that makes the middle 50% far worse, it will be

        • Ok as long as people don't have to pay for it with personal injuries or financial loss.
          • Ok as long as people don't have to pay for it with personal injuries or financial loss.

            They will have to pay. But it will be less than they're paying now.

            • They will pay less on average perhaps, but some people who were unlucky enough to have their car confuse a truck for a bridge will end up paying more than they otherwise would and that is just plain wrong. Their fault if they trust the technology I guess.
      • Autonomous cars need to be 99.999999999999999% reliable before they should be considered ready for public consumption.

        Noting that nothing else is that reliable. Not people, not Verizon, not even condoms - but use the last one with the first two anyway.

    • by zifn4b ( 1040588 )

      The summary states, "it's clear engineers at the company care more about getting things right than they do about being first."

      So, basically what you're saying is, Toyota is the anti-Tesla.

      No, Toyota is being responsible instead of going for a quick short-term money grab due to hype.

    • Re:Toyota is... (Score:5, Interesting)

      by PolygamousRanchKid ( 1290638 ) on Tuesday November 07, 2017 @09:01AM (#55505361)

      The Duke of Wellington claimed he won the Battle of Waterloo against Napoleon on "the playing fields of Eton".

      The big battle for autonomous driving will be won or lost in the tort courts of the US. Who is responsible for the accident? The driver? Or the manufacturer?

      Your local ambulance chaser lawyer would prefer to sue the manufacturer . . . simply because the manufacturer has more money!

      The first big cases will unsettle the industry, but a sort of fudge agreement will be reached between lawyer groups, the manufacturers and the insurance companies. Unfortunately, the average driver will end up paying for this.

      The lawyers don't want to kill the autonomous car industry . . . they want to "milk" it for their "piece of the action".

      • You raise an important point, but as soon as autonomous land vehicles exceed the miles driven per accident that humans are capable of, insurance companies will line up in favor of paying for fewer wrecks.

        Not to put too fine a point on it, but the average driver already pays for this under the mandatory auto insurance laws.

      • Last I heard the auto manufacturers were pretty unanimous that they would be responsible for any accidents that occur while the car is in self-driving mode, so there's not really a whole lot of conflict to be resolved. It's a simple product-safety liability situation - if you're using a product in full accordance with the manufacturer's instructions, and it injures or kills you anyway, then it's a pretty open-and-shut case of liability for a faulty product.

        The only "loophole" I've seen so far is for "semi

    • The summary states, "it's clear engineers at the company care more about getting things right than they do about being first."

      So, basically what you're saying is, Toyota is the anti-Tesla.

      Perhaps what's being said is that Toyota has had issues in the past with software and is trying to be careful about that.

      See: Toyota's killer firmware: Bad design and its consequences [edn.com]

      Barr's ultimate conclusions were that:

      • Toyota’s electronic throttle control system (ETCS) source code is of unreasonable quality.
      • Toyota’s source code is defective and contains bugs, including bugs that can cause unintended acceleration (UA).
      • Code-quality metrics predict presence of additional bugs.
      • Toyota’s fail safes are defective and inadequate (referring to them as a “house of cards” safety architecture).
      • Misbehaviors of Toyota’s ETCS are a cause of UA.
Slip-ups on the road can become fatal in seconds because of the speeds and forces involved. You know people are going to rely on these systems precisely when they should be off the road entirely or, at minimum, paying attention. And what happens when a deer decides to bolt out from the woods in front of your vehicle? Are you going to trust that the car can detect a deer?

This seems like a solution for people who hate the idea of mass transit and transporting goods by trains. Self-driving cars and trucks and hyperloops! FFS, just hire Disney's engineers and build a fucking monorail in most cities and connect them to the suburbs. That would be more than sufficient to raise the quality of life on transit.

    • by zifn4b ( 1040588 )

      This seems like a solution to people who hate the idea of mass transit and transporting goods by trains. Self-driving cars and trucks and hyperloops!

I really hope this is not the primary driver behind this technology. If this is true, the idea is to eliminate the cost of CDL drivers and make the most dangerous vehicles on the road automatically driven. Why couldn't they just have a series of underground tunnels specifically for transporting commercial goods where all the automatic transport vehicles could be driven? That way, if an "error" occurs, it doesn't injure anyone? It's probably because it costs too much to do that I'm guessing... (facepal

    • And what happens when a deer decides to bolt out from the woods in front of your vehicle? Are you going to trust that the car can detect a deer?

      Actually, yes. I trust a computer system that can handle thousands of computations every second much more than I trust a startled, panicky human in that scenario. I don't know if you've ever hit a deer with your car, but they don't exactly give you much warning, regardless of how much you're paying attention to the road.

      Source: Lives in Pennsylvania

    • by Kiuas ( 1084567 )

      And what happens when a deer decides to bolt out from the woods in front of your vehicle? Are you going to trust that the car can detect a deer?

Am I going to trust that a computerized system is capable of detecting an object in front of the car and applying the brakes faster than a human being?

Hell yes I am. The human reaction time is around a second (often considerably more if the driver's distracted or tired). By the time your brain has gone "oh shit, a deer" and decided to slam the brakes, the computer is alread

      • by Kiuas ( 1084567 )

        Replying to myself because I accidentally copied the wrong link which is to a clip of the said talk and not the whole of it. Here's the original TED talk [youtube.com]. The part about the lady & ducks is slightly after 11:10.
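To put the reaction-time point above in distance terms, a rough sketch; the speeds and reaction times here are illustrative assumptions, not measured values:

```python
MPH_TO_MPS = 0.44704  # miles per hour -> meters per second

def reaction_distance_m(speed_mph: float, reaction_s: float) -> float:
    """Distance covered before braking even begins."""
    return speed_mph * MPH_TO_MPS * reaction_s

# At an assumed 60 mph:
for label, t in [("attentive human", 1.0), ("distracted human", 2.5), ("computer", 0.1)]:
    print(f"{label:>16}: {reaction_distance_m(60, t):5.1f} m before the brakes engage")
# attentive human:  26.8 m, distracted human:  67.1 m, computer:  2.7 m
```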

      • "Humans are on average really bad drivers"

        No, we're not. We're actually bloody good at it considering we never evolved to drive something weighing 1.5 tons at anything up to 10 times our maximum running speed alongside other vehicles doing similar speeds.

        People cite accident statistics as if they're significant. When you consider the TRILLIONS of miles driven every year by the worlds drivers and the number of potential accidents that DIDN'T happen because drivers reacted properly, the actual number of accid

...which are also operating in controlled airspace, with much less happening from second to second than on the ground. They're also monitored by air traffic control.
          • And the pilots have trained for that exact scenario, and have drilled on it, and have studied the common causes of that. Yeah, the comparisons to a car are not really useful here.

            • by Viol8 ( 599362 )

              I think you rather missed the point - the computers in aircraft have an easier time than would computers in a completely automated car, yet they still have to hand back control to the pilots from time to time.

    • by hipp5 ( 1635263 )

      And what happens when a deer decides to bolt out from the woods in front of your vehicle? Are you going to trust that the car can detect a deer?

      I'm going to trust the car to detect the deer a lot more than I trust myself to detect a deer. I only have two eyes and I can only focus them on one point at a time. They also only see in the visible spectrum. My car could conceivably have 360 degree, multi-spectrum "vision".

    • by be951 ( 772934 )

      And what happens when a deer decides to bolt out from the woods in front of your vehicle?

Most likely, the computer system detects it more quickly than a person would, reacts faster than humanly possible, and brakes and/or steers in an optimized way to avoid both collision and loss of control. That kind of scenario is one where automated systems easily beat humans. The concerning ones are when visibility is poor, or lane markings are bad or confusing, such as in inclement weather or construction zones.

      just hire Disney's engineers and building a fucking monorail in most cities and connect them to the suburbs

      A train line is great if all the places you want to go are in a nice line. And we could certain

    • Depends on how you look at the efficiency of the solution: in terms of time all these Elon musings are more efficient; in terms of resource utilization, mass/rapid transit is more efficient. The caveat is you need walkable communities on either end for mass transit to work, and Elon isn't too close to a metro station.

  • by Anonymous Coward

    Someone is actually taking time to think this through. Don't get me wrong. I'm ready for our self driving car overlords. I just want to make sure they are ready for the job first.

  • by GWBasic ( 900357 ) <slashdot@NospaM.andrewrondeau.com> on Tuesday November 07, 2017 @08:38AM (#55505269) Homepage
    I'm a little skeptical of a sudden mass takeover with autonomous driving. As this post implies, the risk is huge. Where are autonomous devices in low-risk situations? Why haven't they taken over? I think we're better off with things like dryers that can sort and fold laundry, or dishwashers that can put the dishes away. The risk of a dropped dish or torn shirt is much more tolerable than a car crash at highway speeds.
    • by hipp5 ( 1635263 )

      The risk of a dropped dish or torn shirt is much more tolerable than a car crash at highway speeds.

      But the rewards are also so much lower.

The potential rewards of autonomous driving are HUGE. I won't pay more than a couple hundred bucks to have my dryer fold laundry. But I would pay a lot for society to never have to face drunk drivers again. To give old people the freedom to get out and about. To have cars that can precision-park themselves, thereby taking up way less parking area (no door-opening space needed). And so on.

    • by atrex ( 4811433 )
Fabric is actually very difficult for robotics to deal with because it bunches, snags, etc. That's why the manufacture of clothing is still mostly done in sweatshops instead of being completely automated.

      A dishwasher could be automated, but you'd be talking about having it integrated into a cabinet system and having grippers on slide tracks trying to grab non-metallic, non-magnetic plates and glasses with just the right amount of force not to break them and organize them throughout the cabinet. Any pote
    • by Leuf ( 918654 )
      The thing is people are really quite bad at driving. People who are bad at folding laundry just have wrinkled clothes. People who are bad at driving kill people. "According to the World Health Organization, road traffic injuries caused an estimated 1.25 million deaths worldwide in the year 2010." How many technological advances have the potential to save a million lives per year?
      • by GWBasic ( 900357 )
        There is absolutely no proof that a computer can drive a car better, and safer, than a human. None, whatsoever. Until the proof exists, I'd rather risk something that's cheap to replace instead of my life.
  • I hope your loved ones don't get harmed by this newfound laziness hiding behind flashy, imperfect technology.
    • by zifn4b ( 1040588 )

      I hope your loved ones don't get harmed by this newfound laziness hiding behind flashy, imperfect technology.

      This is why I work from home. :) It's also better for the environment...

      • >This is why I work from home.

        As someone who has had an (abysmally bad) driver determine that their home was a valid roadway... I question the value of your choice.

        • by zifn4b ( 1040588 )

          >This is why I work from home.

          As someone who has had an (abysmally bad) driver determine that their home was a valid roadway... I question the value of your choice.

I'm not sure I follow your logic there; it seems to be nonsense. First of all, if you want to objectively claim that my choice of working from home is less valuable than commuting, then by all means lay out the claim along with supporting logic and evidence. If your claim is subjective, meaning that you would not find my choice valuable, then we just agree to disagree, because what is good for you is not necessarily good for me and vice versa, but we should respect each other.

          • >I'm not sure I follow your logic there, it seems to be nonsense

            I'm not sure I follow the motivation behind your post, but I'm guessing you have no sense of humour and a stick lodged up your backside.

  • by DrTJ ( 4014489 ) on Tuesday November 07, 2017 @08:56AM (#55505337)

Toyota is not the only one considering skipping L3 and going directly to L4. Volvo intends to do the same, as do some of the German vendors.

The reason is that studies show hand-overs do not take only "a few seconds" as the article suggests; there is a tail of up to 40 seconds before the "driver-to-be" comprehends the situation.

    Since 40 seconds is an eternity in traffic, it poses essentially the same challenges as L4 systems. So why bother with L3?
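Forty seconds at highway speed, in concrete terms; 70 mph is an assumed cruising speed, not a figure from the studies:

```python
MPH_TO_MPS = 0.44704     # miles per hour -> meters per second
handover_tail_s = 40     # worst-case comprehension tail cited above
speed_mph = 70           # assumed highway cruising speed

distance_km = speed_mph * MPH_TO_MPS * handover_tail_s / 1000
print(f"{distance_km:.2f} km travelled before the driver fully 'arrives'")
# ~1.25 km -- roughly three-quarters of a mile of limbo
```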

    • by Anonymous Coward

      Bingo. Even highly-trained airline pilots have proven unable to deal with a handover from the autopilot to the pilots without flying a perfectly good airliner into the sea. So expecting the average driver to do it when they don't have minutes to react is crazy.

      What makes it doubly bad is that the computer will only hand over when it runs into a situation it can't handle, which means the human will only be expected to take over when the car is in a complex and dangerous situation to begin with.

Toyota is not the only one considering skipping L3 and going directly to L4. Volvo intends to do the same, as do some of the German vendors.

      Google (now, Waymo) also decided years ago that L3 is a bad idea.

    • by sl3xd ( 111641 )

      Even with the semi-autonomous systems in my car (adaptive cruise control and lane guidance), hand-off is a big problem. Either the car decides to slam on the brakes when a car starts to pull off the road (slows down and switches lanes), or it accelerates with reckless abandon because it can't sense the stopped cars 300+ feet ahead. (And it won't slow down when it does detect the cars ahead because the speed delta is now greater than 30 MPH)

      Or it starts tugging at the wheel to guide my car back into the cent

  • by Anonymous Coward

    Expecting the human to take over in a panic situation IS unsafe. The human should only be taking over while parked.

    There is exactly one class of people who have the training qualifying them to take over a driving car: Driving instructors. And they are usually limited to stepping on the brake, something the autonomous car could easily do on its own in a panic situation.

    However, that's not saying that we have to go straight from level zero to level five. We just have to do it in a different way.

    Rather than le

    • Expecting the human to take over in a panic situation IS unsafe. The human should only be taking over while parked.

      I agree completely. Doing a handoff of a moving car is just asking for trouble.

      Rather than letting the car drive on the straight road, and expecting the human to take over in case the car overlooks a pedestrian, we should be letting the human drive on the straight road, and let the car take over when the human overlooks a pedestrian.

      There is a 3rd solution which I prefer. Let the computer drive on straight roads in good weather on limited access highways first. i.e. the boring stuff. If the weather starts to turn bad or you are approaching a city, have the computer pull over to the side of the road so that you can switch driver. This is already common practice. Growing up on vacations, my mom would help drive on the long stretches and then pull over an
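The "human drives, computer guards" arrangement described in this thread (Toyota's research arm has publicly floated a similar concept under the name Guardian) might look like this at the control-loop level. Everything below -- the types, the threshold, the helper -- is a hypothetical illustration, not any vendor's actual API:

```python
from dataclasses import dataclass

@dataclass
class Command:
    steering: float  # -1.0 (full left) .. +1.0 (full right)
    braking: float   # 0.0 (coast) .. 1.0 (full brake)

def guardian_step(human_cmd: Command, collision_prob: float) -> Command:
    """One tick of a hypothetical guardian-style arbiter: the human's commands
    pass through untouched unless the car predicts imminent danger, in which
    case the computer brakes -- no mid-drive handoff in either direction."""
    if collision_prob > 0.9:  # threshold is an illustrative guess
        return Command(steering=human_cmd.steering, braking=1.0)
    return human_cmd

# Example: the human hasn't noticed a pedestrian; the guardian brakes for them.
print(guardian_step(Command(steering=0.0, braking=0.0), collision_prob=0.95))
```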

  • Whenever you have to hand control of the vehicle back to the human, there is going to be a delay. This is absolutely unavoidable and potentially very dangerous.

The driver, who was presumably inattentive during the fully-automated drive, will have to assess the surroundings and respond. This makes the existence of an SAE Level 3 car inherently unsafe: there is little empirical support for the idea that we can have a safe sometimes-automated system that fails over into manual control.

    Human attention change, perception tim

  • Considering Toyota couldn't write software for brakes I'm not surprised they're uneasy about writing autonomous driving software.

So, after a while with cars driving themselves, what exactly will the turnover to the now-rusty human do but ensure a crash? Also, there is no instantaneous human situational awareness. What... wait... oh, THAT is happening, so I must... crash. Driving is a never-ending story: you must pay attention until you turn the key to off. Also, the system is perfect, or it is not. Everything degrades and eventually needs repair. And how dangerous is a degrading self-driving-car system? And, since they cannot legally

Until we have REAL AI, self-aware, capable of actual thought and real interaction with humans, and not the current dead-end approach ('learning algorithms', 'expert systems', etc.), we will not have truly safe, effective self-driving cars -- and with these half-assed systems fully in control of a vehicle, with no possibility of humans taking control, we will have death and disaster. At best we'll have totally frustrating vehicles that stop for apparently no reason, while it 'phones home' so a human operator
    • by sl3xd ( 111641 )

      Until we have REAL AI, self-aware, capable of actual thought and real interaction with humans

We don't even know what "self-aware" is, to say nothing of "actual thought". How exactly is it defined? A sea cucumber is aware enough of itself to try to preserve its existence, and yet it has no brain. Is that "self-aware"? They're not bumping around in the dark, speeding off in random directions, randomly eating, or mating with whatever they touch. They can process input from their sensory organs to find food, and are able to communicate for the sake of reproduction. Is that thought?

      How is a sea cucum

      • We don't even know what "self-aware" is, to say nothing of "actual thought". How exactly is it defined?

...and THAT is why we can't create REAL AI; we have NO IDEA how these things actually work. And you are taking for granted how complex a task driving is, which is why you need a mind that can actually THINK.
