
New York City Welcomes Robotaxis - But Only With Safety Drivers (theverge.com)

An anonymous reader shares a report: New York City announced a new permitting system for companies interested in testing autonomous vehicles on its roads, including a requirement that a human safety driver sit behind the steering wheel at all times. As cities like San Francisco continue to grapple with the problems posed by fully driverless for-hire vehicles, New York City is trying to get ahead of the problem by outlining what it calls "a rigorous permitting program" that it claims will ensure applicants are "ready to test their technology in the country's most challenging urban environment safely and proficiently."

"This technology is coming whether we like it or not," Mayor Eric Adams said in a statement to The Verge, "so we're going to make sure that we get it right." The requirements would exclude companies without previous autonomous vehicle testing experience in other cities. Applicants would need to submit information from previous tests, including details on any crashes that occurred and how often safety drivers have to take control of the vehicle (also known in California as "disengagements"). And in what is sure to be the most controversial provision, fully driverless vehicles won't be permitted to test on the city's public roads; only vehicles with safety drivers will be allowed.



Comments:
  • by Dusanyu ( 675778 ) on Thursday March 28, 2024 @01:53PM (#64351859)
    These technologies are still less than perfect, and having a human babysitter for them feels like a wise choice.
    • by Firethorn ( 177587 ) on Thursday March 28, 2024 @02:19PM (#64351925) Homepage Journal

      That's a complex question, though. As the one self-driving fatality shows, it's hard for a human safety driver to keep paying enough attention to prevent tragedy if the self-driving system messes up. Past a certain point they're pretty useless, and keeping them removes most of the benefit of a self-driving system: eliminating the relatively expensive human driver.

      That said, requiring such systems to prove themselves in NYC before they can drop the drivers makes perfect sense.

      • Are these safety drivers required to have a NYC taxi cab medallion [wikipedia.org]?
        If not, it seems they're not competing on a level playing field.
      • it's hard for a human safety driver to keep paying enough attention to prevent tragedy if the self-driving system messes up.

        If they can't be bothered to pay attention, how do they drive their own car? Their job is to always be alert for when, not if, the software freaks out. If they can't do that, they shouldn't be in the program.

        • it's hard for a human safety driver to keep paying enough attention to prevent tragedy if the self-driving system messes up. If they can't be bothered to pay attention, how do they drive their own car? Their job is to always be alert for when, not if, the software freaks out. If they can't do that, they shouldn't be in the program.

          It's easy to pay attention when you're actually driving. Much harder to continue to pay attention when you're literally just sitting and watching some system drive the car, trying to be more alert than the computer. Passive != active. Like, at all. It's the difference between being a driver and a passenger. The only way of overcoming that I can think of is to switch over every so many minutes to keep the safety driver alert, but I'm not sure what effect that would have on the stats they're trying to gather.

          • It's easy to pay attention when you're actually driving. Much harder to continue to pay attention when you're literally just sitting and watching some system drive the car, trying to be more alert than the computer.

            Yes and no.

            If the car can freak out and do something insane at any moment, then definitely yes. It's very hard to maintain high alertness and readiness all the time, and the less frequently you have to react, the harder it is.

            But in my experience with one (admittedly inferior) self-driving system, Tesla's, that's not how it works. That's also not how it works with novice human drivers. Or experienced human drivers operating on unfamiliar roads. Instead, with Tesla FSD or with a novice human driver, there

            • On the other hand, I know people who are extremely good backseat drivers, able to pay close attention to the driver and everything around the vehicle for hours on end, even when the driver is quite good and never actually needs input from the backseat driver. I think those people could make great safety drivers.

              I tend to think those type of folks just have serious enough trust issues that they can't unclench their nerves long enough to let go. I'm more relaxed driving than some of these folks are when they're sitting behind the driver.

              • On the other hand, I know people who are extremely good backseat drivers, able to pay close attention to the driver and everything around the vehicle for hours on end, even when the driver is quite good and never actually needs input from the backseat driver. I think those people could make great safety drivers.

                I tend to think those type of folks just have serious enough trust issues that they can't unclench their nerves long enough to let go. I'm more relaxed driving than some of these folks are when they're sitting behind the driver.

                Yet, oddly, if you offer to let *them* drive, they often refuse...

                In any case, that seems like the kind of person you want as a safety driver, as long as you can convince them not to intervene constantly.

                • On the other hand, I know people who are extremely good backseat drivers, able to pay close attention to the driver and everything around the vehicle for hours on end, even when the driver is quite good and never actually needs input from the backseat driver. I think those people could make great safety drivers.

                  I tend to think those type of folks just have serious enough trust issues that they can't unclench their nerves long enough to let go. I'm more relaxed driving than some of these folks are when they're sitting behind the driver.

                  Yet, oddly, if you offer to let *them* drive, they often refuse...

                  In any case, that seems like the kind of person you want as a safety driver, as long as you can convince them not to intervene constantly.

                  Paranoid hypervigilance tends not to be a good frame of mind for oversight to start from, but they would stay alert. Their nerves would keep them from slipping into boredom.

        • It's a difficult problem. It turns out it is practically impossible to maintain alertness, focus, and readiness to take over driving at a split second over extended periods. They aren't mimicking the actions of driving before the moment of takeover; they aren't engaged in "driving", just "observing". Even if you hire athletes and pay them exorbitant amounts to perform the service (and they are doing neither), they are going to fail at anything requiring split-second judgement and reflexes. Tack on the conditio

        • If they can't be bothered to pay attention, how do they drive their own car?

          The difference between active interaction and passive observation. Humans suck at the latter. When you're the one driving, you're making decisions every second, which keeps you involved.

          When the software freaks out, they not only need to be paying attention, they need to be paying attention closely enough to catch the freakout, then plan and execute a corrective action fast enough to avoid an accident. That's tough.

    • A human babysitter for them feels like a wise choice.

      This is a perfect example of the worst possible task for a human. Hours and hours of boredom and nothing happening followed by a few seconds where absolute attention is needed immediately. That's just not the way humans work well. It would be much better to just disconnect the self-driving software and allow the babysitter to actually drive the car, so that when something goes wrong they are fully aware of and connected to the system.

      • Hours and hours of boredom and nothing happening followed by a few seconds where absolute attention is needed immediately.

        Well, then this is going to blow your mind: This is exactly what Tesla expects of everyone using their "self-driving" vehicles. I kid you not.

        • Well, then this is going to blow your mind: This is exactly what Tesla expects of everyone using their "self-driving" vehicles. I kid you not.

          Yeah, I think we've had a number of stories on here about how that makes Teslas death traps, and how they get their safety numbers by comparing self-driving on motorways ("highways" for the Britishally challenged) with human drivers in all circumstances. I mean, basically Musk is saying exactly "you might die, but it'll be your fault, not mine". He seems a bit of a nut case TBH.

  • Someone can sleep in the front seat as it inches down the Avenue of the Americas. If it crashes, a poor, now-homeless person is not a good target for a lawsuit.
  • They'll be programmed to move in and out of the congestion price zone, at fifteen bucks a pop. Cha-ching!
  • I would never take such a job, as I don't believe there's a human alive that can recognize when the robot is NOT going to apply the brakes and NOT run down the little old lady with the walker. It's hard enough to do it when you know you are the only one that can act, but adding the analysis time necessary to realize that the automation is screwing up is, I believe, an impossible task.

    I think it will turn out that the human in the loop will be good for only having someone to blame and send to prison for fa

    • The only way it works is if you let the human drive while the AI judges, then review the AI judgments after the fact.

      I don't understand why this wasn't the default system in the first place.

      • They literally do that; they just do it early in the AI's development. If they are testing on the streets of NYC, they are past that part and at the point where they now need to let the AI drive.

        • You have more faith in the systems than I do. I don't think we're anywhere near something that can replace a decent human driver... 'self-driving' is being pushed on us because nobody wants to pay a taxi or truck driver.

  • If the tech is ever developed to the point where it's safe to eliminate safety drivers, the requirement should be dropped.

  • Reminds me of high school when I took Driver's Education training. They had a special car with an extra brake pedal on the passenger side.

    Imagine that scenario, but the student driver is driving the vehicle via remote control with no two-way communication. Is that even feasible? I'm sure those instructors have a good eye for when to hit that brake, but they depend on feedback from the student driver including body language. Without that, it's going to be orders of magnitude harder to monitor.

    • Reminds me of high school when I took Driver's Education training. They had a special car with an extra brake pedal on the passenger side.

      Imagine that scenario, but the student driver is driving the vehicle via remote control with no two-way communication. Is that even feasible? I'm sure those instructors have a good eye for when to hit that brake, but they depend on feedback from the student driver including body language. Without that, it's going to be orders of magnitude harder to monitor.

      I'd say the situation is actually even worse than you think. Good driving instructors are always interested in what their student is doing and feel they can improve it with advice and instructions, even before they begin to think about touching the controls. Add to that the fact that experience has probably taught them they are riding along with a person who is actively trying to kill them, and who makes several attempts a day, and they are much more likely to pay attention. I'm sure that the safety drivers

  • "And the safety drivers must earn as much as taxi dr...uhhhn...taxi token owners who rent to poor taxie drivers."

    "Ah, hell. We corrupt politicians are really playing a gymnast's game of Twister, right foot don't drive AI from state! Left foot toss something to entrenched taxi interests we are corrupt for and beholden to! Right hand don't piss off voters! Left hand launch talking heads about race to the bottom!

    Before modding me down, realize your thoughts, so-called, fall under one or more limbs.

  • And in what is sure to be the most controversial provision, fully driverless vehicles won't be permitted to test on the city's public roads; only vehicles with safety drivers will be allowed.

    How is having a safety driver in an UNTESTED VEHICLE controversial? Isn't the whole point of the 'testing' phase to make sure they work first BEFORE unleashing them on the streets unmanned? In fact, it would be STUPID not to require safety drivers. Duh.

    • Money. It's controversial because investors will have to spend some extra money to keep people safe. How can you maximize shareholder value if you can't externalize the costs onto random pedestrians? /s
  • "This technology is coming whether we like it or not," is his asserted conclusion. NYC sells taxi medallions and has every authority to control the industry.

  • It's one thing to be driving around all day, but another to basically just sit there and look around, not actively doing anything. I'm pretty sure we're going to have drivers dozing off after a while.
