Face-Scanning Loses by a Nose in Palm Beach

Rio writes: "A story from myCFnow.com reports that Palm Beach International Airport officials said face-scanning technology will not become part of their airport's security system." Looks like the ACLU was right. Tested against a database of 15 employees, the technology gave false negatives -- failed to recognize the test subjects -- over 50% of the time. A spokesperson said, "There's room for improvement." The Pentagon said the same thing in February. The false-positive rate is more important -- it isn't mentioned, but even if it were just 0.1%, Bruce Schneier argues, it'd be useless.
  • by chriso11 ( 254041 ) on Monday May 27, 2002 @12:06AM (#3589478) Journal
    Perhaps this is why I can't remember anyone's name - half the people look the same
  • only 15 employees? (Score:2, Insightful)

    by Atrax ( 249401 )
    a bit of a small sample, don't you agree? and how was it composed...?
  • by serps ( 517783 ) on Monday May 27, 2002 @12:08AM (#3589482) Homepage

    Airport face identification isn't practical? Try telling the Australian Government that. [news.com.au] They are trialling a hybrid face-recognition/biometric passport system that sends shivers up my spine.

  • what does this do (Score:4, Interesting)

    by vectus ( 193351 ) on Monday May 27, 2002 @12:09AM (#3589485)
    but delay its deployment for a couple years? this isn't really a victory at all.. I mean, I bet this will only delay the technology two years.. maybe less.

    If anything, it should be a call for all Americans to protest this kind of thing (should you disagree with it).
  • I think that we should make all pregnancy tests have a 50% false positive rate also. That would be almost as fun as getting thrown into a cell and being denied legal counsel for a few days.

  • Human oversight (Score:2, Informative)

    by enjo13 ( 444114 )
    I think a 0.01% false positive rate would be perfectly acceptable. I have not seen one proposal for a face scanning system that has not also included human oversight.

    It's exactly the system casinos have successfully deployed to keep known "cheaters" out of their casinos. The face scanning technology merely provides POSSIBLE matches; the actual decision on further investigation rests with a human operator...

    This seems perfectly reasonable to me from a technology standpoint; I'll argue the ethics of this technology some other time :)
    • While it sounds good that a human operator makes the final decision, that is not how governments like to do things. In the end they like technology because technology can be quantified and makes for pretty statistics with pictures and charts and all those things people can show easily...
    • It's exactly the system casinos have successfully deployed to keep known "cheaters" out of their casinos. The face scanning technology merely provides POSSIBLE matches; the actual decision on further investigation rests with a human operator...
      An excellent example. To a casino, a "cheater" is someone who has figured out how to win more than the 48.5% that the casino has defined as "fair", and therefore puts the casino's 90% gross profit margin in jeopardy. As a general rule he is doing nothing that can be considered 'wrong' or even 'illegal' - he is just doing better than the casino would prefer. So massive amounts of technology are deployed to exclude him from the casino premises.

      Yep, we need a national ID card. We really do...

      sPh

  • by I Want GNU! ( 556631 ) on Monday May 27, 2002 @12:14AM (#3589500) Homepage
    Not using faulty technology is a great idea! Now all they need to do is repeal the law that takes away school or library aid if they don't use filtering technology -- the filters don't have open lists of sites, often block sites they shouldn't, and don't block sites they should!
    • I'm quite pleased that the Multnomah County library system has been fighting this - they offer filtering software, but it's optional, because they realize that while it blocks some legitimate material and fails to block some porn, it also does block a lot of stuff that people don't intend to be looking at.

      (I don't live in Multnomah County, but do pass through it every day on my way from home in Clackamas County to work in Washington County. Yes, my commute sucks.)

      More info here. [epic.org]
  • False positive rate (Score:3, Interesting)

    by Triskaidekaphobia ( 580254 ) on Monday May 27, 2002 @12:16AM (#3589502)
    A similar system in Florida [nando.net] (not an airport, but probably a vaguely-similar number of people) had 14 false positives in the first 4 days of operation.
    (Two of the false positives even got the sex of the suspect wrong)

    Since they state that it was the first days, perhaps it just needed tuning?
  • by Papineau ( 527159 )
    Bruce talks about 99.9%, so there's 0.1% left, not 0.01% as the story says right now.

    If a person is mistaken once for a terrorist (or a "normal" criminal), don't you think other recognition points will make the same mistake? What do you do then? Plan a few extra hours each time you take the plane? Get an official letter stating "I'm not a terrorist"? If a simple letter can get you through, terrorists will get some.
    • If a person is mistaken once for a terrorist (or a "normal" criminal), don't you think other recognition points will make the same mistake? What do you do then? Plan a few extra hours each time you take the plane?
      I presume that the system would also have a photo of the suspect, so that a human could compare the photo to the person the system flagged as a possible terrorist. If you happen to look almost exactly like a person in the database, so that even humans mistake you for him/her, then it'll be like if you looked almost exactly like someone on the FBI's "most wanted list"; it sucks, but, well, what can you do?
      • if you looked almost exactly like someone on the FBI's "most wanted list"; it sucks, but, well, what can you do?

        What can you do? How about:
        1. Never take another plane ride again.
        2. Never take another train ride again.
        3. Try to avoid driving on the highway at all costs, so that you don't get pulled over and thrown in jail.
        4. Pray every morning when you get up that your neighbor doesn't report you to the Feds, so that they bust down your door with a no-knock warrant and shoot you dead on the spot.

        In other words, just give up any chance of ever living without fear again.

        I sincerely hope you're just being a troll, because if facial recognition were ever to be widely implemented, the above would be a way of life for tens of thousands of perfectly law-abiding citizens in this country, or wherever else it was implemented.

        If you really don't think it matters, I'll tell you what: send me a couple of photographs of yourself, in the classic mug shot poses, and within a week I'll have you in that wonderful little FBI database with nice little TERRORIST notes all over your file (all it takes is unsubstantiated rumors these days.) Then we'll see how much you enjoy traveling...
        • if you looked almost exactly like someone on the FBI's "most wanted list"; it sucks, but, well, what can you do?
          What can you do? How about:
          1. Never take another plane ride again.
          2. Never take another train ride again.
          So should the FBI not put out a "most wanted list", because there might be someone out there that really looks like a dangerous criminal? Even if photos are only given to the police, what if a police officer sees a look-alike and gets trigger happy? Should information about dangerous criminals just not be distributed, because it might have bad effects on possible look-alikes?
    • by AftanGustur ( 7715 ) on Monday May 27, 2002 @03:30AM (#3589918) Homepage


      Bruce talks about 99.9%, so there's 0.1% left, not 0.01% as the story says right now.

      No, sorry, just read Bruce's Cryptogram [counterpane.com]


      Suppose this magically effective face-recognition software is 99.99 percent accurate. That is, if someone is a terrorist, there is a 99.99 percent chance that the software indicates "terrorist," and if someone is not a terrorist, there is a 99.99 percent chance that the software indicates "non-terrorist." Assume that one in ten million flyers, on average, is a terrorist. Is the software any good?

      No. The software will generate 1000 false alarms for every one real terrorist. And every false alarm still means that all the security people go through all of their security procedures. Because the population of non-terrorists is so much larger than the number of terrorists, the test is useless. This result is counterintuitive and surprising, but it is correct. The false alarms in this kind of system render it mostly useless. It's "The Boy Who Cried Wolf" increased 1000-fold.
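
      To make Schneier's arithmetic concrete, here's a minimal Python sketch of the base-rate computation he describes (the 99.99 percent accuracy and one-in-ten-million base rate are his assumed numbers, not measured ones):

        # Schneier's base-rate arithmetic, using his assumed numbers.
        accuracy = 0.9999                   # P(alarm | terrorist) = P(no alarm | non-terrorist)
        false_positive_rate = 1 - accuracy  # 0.0001
        flyers = 10_000_000                 # population screened
        terrorists = 1                      # assumed: one terrorist per ten million flyers

        true_alarms = terrorists * accuracy
        false_alarms = (flyers - terrorists) * false_positive_rate

        print(f"true alarms: {true_alarms:.4f}")    # ~1
        print(f"false alarms: {false_alarms:.0f}")  # ~1000

      So even with his "magically effective" accuracy, only about one flagged person in a thousand would actually be a terrorist.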

      • All well and good, but that assumes that if the software throws up a positive you will immediately be arrested, and possibly executed without trial by some sort of ED-209 lookalike.

        What a sensible system would do would be notify another system which is better at identifying faces - we call it a human being. Then they can check, and where appropriate take further steps. Which may or may not involve having you arrested and executed without trial.

        What it means is that instead of having to check ten million flyers, the security people have to check 1000, which is far more feasible. I'd argue that false positives are a lot less harmful than false negatives in such systems, provided positive is treated as "take a closer look" rather than "terminate with extreme prejudice"...
  • What is so retarded about these supposedly security-engendering technologies is that they can only catch someone (if they work at all) if they are in the database. This stops absolutely zero sleepers from committing some act of terrorism, and sleepers are exactly what the September terrorists were. The only way they would have possibly been prevented from boarding those planes was if there was some ultralarge database that collected all the information from all possible channels and picked them out of the crowd for having expired student visas. Even then it isn't terribly likely they would have been prevented from boarding the planes; they're paying customers who will get something in the mail from the INS warning them their visas are expired.
    • The September 11 terrorists weren't "sleepers"

      Atta (the scary looking ringleader) had previously been arrested in Israel for being a terrorist. He was released as part of Bill Clinton's Mideast "peace" initiative, but was still on various US gov't lists of terrorists.

      If the INS wasn't totally useless, if the FBI, FTC etc. shared information, they would have been deported when they were caught being here illegally, driving with an expired license, failing to show up for court, or buying airline tickets.

      Tom Daschle and the Democrats want to blame George Bush because the FBI and CIA, in hindsight, had the information to see this coming.

      The real tragedy is that they, and thousands of others, were here illegally, and we did nothing.

      • Not all were sleepers, but not all were well-known terrorists; I was just generalizing. I probably should have said many of the terrorists in September were sleepers. They had no records (at least in the US) to speak of. There was indeed a communication breakdown in the security structure; the INS should have been doing its job and so should the FBI. That is a case where some unified database of offenders might someday, when the technology is foolproof, catch someone. That day is long off however.

        But the true sleepers involved in September's attacks as well as people we have no idea about are the sort that will pass through the cracks of any sort of database system. Joe Terrorist moves to the US or just gets a visa to live here and goes about his business and never gets so much as a speeding ticket. Then one day an e-mail turns up talking about enlarging his penis and has a particular picture on it and the appropriate code words. He then builds a bomb and blows somebody up. Other than a group of telepaths or time traveling cops how are you going to screen people coming into the US to see if they are a terrorist deep down? Facial recognition is just going to show that Joe Terrorist has no criminal record to speak of and pays his taxes. It isn't going to tell you his backpack has twenty pounds of home made explosives.
    • Nope, not that. With a face recognition program you have these (partial list of) benefits:

      1) If you travel with a false identity you HAVE A PROBLEM :)
      2) If you are not registered, you HAVE A PROBLEM :)
      3) Whoever you are and whatever identity you are using, they can trace all the locations you've traveled. You are the same guy everywhere. Your passport is your body.

      I guess I'll be investing in those biomask companies soon :0
    • Well duh. No system is going to be able to recognise you as a terrorist if you're not already known as one (unless you want to make offensive and largely useless generalisations about skin tone and close-set eyes...)

      That doesn't stop face recognition from having potential in areas where some people *are* in the database, including not only terrorism but also missing persons, wanted (non-terrorism) criminals, football hooliganism etc.
  • Unpopular View (Score:5, Insightful)

    by deebaine ( 218719 ) on Monday May 27, 2002 @12:19AM (#3589517) Journal
    I don't necessarily understand the objections to face scanning technology. To be sure, I don't want computers flagging people to be arrested. But computers sift through enormous amounts of information, making them ideal for a first pass. If they are used to flag people to be scrutinized by humans, I don't have any objections. In fact, if a computer can flag 20 of the hundreds of thousands of faces so that human experts can give a closer look, so much the better.

    Incidentally, by this reasoning, it is in fact the false negatives that are more important. False positives can presumably be discarded by humans providing closer scrutiny. False negatives in this scenario, however, present a major difficulty.

    Face scanning technology isn't innately evil. Like everything else, if we use it wisely, it can help. If we use it irresponsibly, it can hurt. No surprises there.

    -db
    • Re:Unpopular View (Score:3, Insightful)

      by h0rus ( 451357 )
      Yes, and the reason for this tension is we expect these methods to be misused.
    • Re:Unpopular View (Score:5, Insightful)

      by achurch ( 201270 ) on Monday May 27, 2002 @12:53AM (#3589618) Homepage

      I don't necessarily understand the objections to face scanning technology. [...] Like everything else, if we use it wisely, it can help. If we use it irresponsibly, it can hurt.

      You just hit the nail on the head there; most people who don't like this technology don't like it because (they believe) it will be used irresponsibly, eventually if not immediately. Power corrupts, as the old saying goes, and people are unfortunately easily corruptible. Ordinarily I wouldn't be quite so pessimistic, but given all the hoopla over the "War on Terrorism", I'm inclined to side with the Slashdot popular view.

      (Note to moderators: Yes, I do realize that there are many points of view represented on Slashdot, thankyouverymuch.)

    • You're absolutely correct when you say that the technology isn't innately evil. Technology is never evil -- it's neutral. It's how we use it that determines its value.
      The main reason I don't like facial scanning is quite simple. I view it as a slippery slope -- we start scanning for a few "bad guys" now, and what happens a few years down the road when it becomes feasible to scan everyone to make sure they're not doing something "wrong"? If we give our government the power to watch us all the time, we've given up the ability that was guaranteed to us in the Constitution to think and speak freely. If you've never read 1984 you really need to. The descriptions of the lengths that the man in the book went to avoid being observed will drive you nuts -- and make you really think about where this is going. Orwell was off by a few years -- but it wouldn't surprise me if it turns out he was only wrong by about 20-25 years.
    • Re:Unpopular View (Score:4, Insightful)

      by TheCage ( 309525 ) on Monday May 27, 2002 @01:08AM (#3589662)
      But what about the fact that it doesn't solve the problem? This is presumably to stop terrorism, yet a significant number of terrorists are not part of any database, which is required for something like this to work. Seems just like a waste of money to me.

    • > To be sure, I don't want computers flagging people to be arrested. But computers sift through enormous amounts of information, making them ideal for a first pass. If they are used to flag people to be scrutinized by humans, I don't have any objections.

      Are those humans going to be highly-trained well-paid experts like those who work airport security?

      The basic expectation is that the human 'supervisors' will adopt a strategy of either (a) waving through everyone that the computer identifies, because they're tired of taking the heat for false positives, or else (b) calling for the cops every time the computer identifies someone, so they won't have to take the heat if a terrorist does get through. (Interesting problem, that. I would guess that we would see a lot of variety of individual behavior early on, after which it would settle into a state where all 'supervisors' behave the same. Presumably that would be state (a) except during 'alerts' and for relatively short periods after real incidents.)

      The only optimal position between those extremes is to get it right almost every time, i.e. to have a real expert (or team of experts) looking over the computer's shoulder. And I seriously doubt that society is going to be willing to pay for that.

      • This is an excellent point on why there will never be a technical cure-all for this problem, especially now that airport security is federalized. The backbone of government employment is avoiding responsibility for bad things. The level of "urgency" (for lack of a better word) cannot and will not be kept up by the people in charge of security, which will render any technical solution useless.

        Without human supervision, there will be too many false positives for the average person to stand for. Without *diligent* human supervision, the false negatives will slip through too easily.

        Not that I'm necessarily being critical of the security employees. It is only human nature. How many security checks and stops did you happily (or at least understandingly) endure in the months after September 11th that you grouse about now? Keeping security personnel at top alert all the time is the problem they should be working on. That and getting the INS to do their job.

      • Seriously, did airport security increase that much post-9/11? The wage is still so low that the airport security corporations compete with McDonald's for manpower.


        If you start to think about it, wouldn't you say that the Bush administration should be thankful for the 9/11 attack? Now, Bush can do what he does best: show strong leadership. We all remember his campaign speeches, right?


        However, what kinds of strong leadership has he given? He has reconfirmed his alliance with Pakistan, a country run by a general who took power in a military coup, under the banner of "protecting freedom". He needed to do this in order to punish the Taliban.


        Now, his poor judgement may very well be biting him in his ass. Pakistan has long offered support for the resistance movement in India-controlled Kashmir. How this support has manifested itself in real life is a matter of debate. However, India does not think Pakistan has done enough to crack down on the separatists in Kashmir after the attack on the Indian parliament in December. Consider it comparable to a band of terrorists attempting to storm Capitol Hill, and then having the nation the terrorists came from refuse to stop supporting the same forces.


        What else goes on in Pakistan? Every once in a while, you'll see small or large reports about how parts of the Pakistani intelligence service are sympathetic to Al Qaeda and the Taliban. Wonder how Mullah Omar got away? He travelled with a pile of money, paying off warlords that the USA trusted for free passage.


        Rather than effectively fighting terrorism abroad, your government seems to favor disclosing every non-specific, non-corroborated terrorist threat, complete with security checkpoints that close down this or that because of a suspicious package.


        It's looking bleak, folks. Any good conspiracy theorist (or reader of 1984 by G. Orwell) will tell you that keeping people afraid is a good way of controlling their ability to think rationally.


        Oh, and would you like to know what I believe to be the ultimate terrorist strike? Trigger a landslide off the continental shelf along the Californian coast. According to the Discovery Channel, the ground shows signs of previous landslides. One or more large-scale landslides could trigger a huge tsunami that could wipe out portions of California's coastal areas. What materials are required? Honestly, I don't know, but I'm guessing a few recreational boats with primitive depth charges or timed mines would have a pretty good chance of triggering something if they had a good geological report.


        I hope I didn't make any Californians piss their pants. I'm just speculating. And I hope I won't have any government agency knocking on my door tonight.


        Then again, the most effective portion of the WTC attack might be the fallout. America is marginalizing itself, giving the rest of us ever fewer reasons to really like the American government. (I like Americans, btw).

    • To be sure, I don't want computers flagging people to be arrested.

      I do. If some dick attacks me on Saturday night and is clocked by a security camera, and is then spotted in the mall the next day, I want the police to know about it.

      On the other hand - when the UK becomes a complete police state (in about 6 months at the current rate) I DON'T want to have to cover my face going into a 'subversive' bookshop for fear of being arrested and questioned about my support for 'the way of Tony'.

      Ah, dilemmas - where would we in the rich west be without them?

      Face Recognition or Feed People
      Daddy or Chips
      Daddy or Chips
      Daddy or Chips

      Chips!
    • Face scanning technology isn't innately evil. Like everything else, if we use it wisely, it can help. If we use it irresponsibly, it can hurt. No surprises there.

      Oops, what you (and many others apparently) seriously fail to see is that all these face scanners can produce is a false sense of security. If every airport used such a device, it would be pretty damn easy for any terrorists or other criminals to modify their face enough that the face scanner would fail for them (false negative).
    • So the question is: who do you trust to use it responsibly and not abuse it?

  • False positives are fine, though a failure rate of 50% is clearly way too high. A false positive just means someone harmless got flagged for a second look, and a human can sort that out.

    It is the false negatives that are truly scary. If a known terrorist sympathizer can board a plane without setting off any signals then it is clearly a useless product.

    Luckily, humans have the ability to fuzzily predict terrorist-like behavior (now that everyone's on high alert, that is).
    • Let's see if you think a false positive is ok when the guy with the rubber gloves is up your ass to the elbow looking for explosives.

      False positives are as bad as, if not worse than, false negatives.
      • Would you feel better with the explosives container found lodged in your ass at the crash site?
        • Considering the low chance of hijacking, I'll pass on the 1 in 1000 chance of anal intrusion, thank you.
          • Yeah. Those bombings [msnbc.com] never seem to happen anymore. [msnbc.com]
            • 610,000,000 people fly in the US each year. Being generous, in the last 5 years 6,000 people were affected by commercial air accidents in the US; per year, that's less than 0.0002%. Actual terrorists are rarer still, by a factor of at least 1,000. So now you're detecting over 100,000 false terrorists for every real terrorist you might encounter. If you want to have false positives, you'd better get them considerably less frequently than 100,000 times as often as a true positive.

              More perspective: That's ~610,000 people held for questioning due to false positives each year. Almost 2,000 a day. If you had to question 2,000 people a day, and you knew that out of those 2,000 people probably none of them were terrorists, how long would it take before you started doing a sloppy job? Talk about thankless work, and enormous expense.
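
              A minimal sketch of the arithmetic behind those figures (the 610 million flyers come from the parent comment; the 0.1% false-positive rate is the one discussed in the story):

                # Rough check of the parent's numbers.
                flyers_per_year = 610_000_000
                false_positive_rate = 0.001   # the 0.1% rate from the story

                flagged_per_year = flyers_per_year * false_positive_rate  # 610,000
                flagged_per_day = flagged_per_year / 365                  # ~1,671

                print(f"{flagged_per_year:,.0f} false positives a year, "
                      f"~{flagged_per_day:,.0f} a day")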
  • by MikeD83 ( 529104 )
    How many times have you gone to a store and bought an item with an electronic anti-theft tag and not had it removed properly, only to be stopped once you begin to exit? A loud alarm goes off, and everyone in the front of the store looks at you wondering... is that a thief? Extremely embarrassing. False positives happen all the time. As long as they are dealt with in a timely manner it is still OK; and already deemed acceptable by MOST of society.
    • Clerks in bookshops don't have machine guns and don't have the authority to arrest and strip search you.

      Given the choice of a false positive in a bookshop and one at the airport I know which I would want to avoid.
    • by Phroggy ( 441 ) <slashdot3@@@phroggy...com> on Monday May 27, 2002 @01:32AM (#3589718) Homepage
      Actually, true story: I was at Fred Meyer's a few weeks ago (for those not fortunate enough to live in the Northwest, they sell pretty much everything, at decent quality and decent prices). In addition to my groceries, I'd picked up a pair of khaki pants. They've now got those self-checkout scanner things, in addition to the regular checkout lines, so I decided I'd try it. I didn't do so well. Anyway, in particular, I hadn't noticed that the pants had a security tag on them, and I neglected to remove it. I'm not sure how I would have removed it anyway, but the really large man keeping an eye on the self-checkout lines would surely have taken care of it.

      So I cram the pants and half my groceries into my backpack, the other half in plastic bags. I leave. The alarm goes off. It occurs to me that the pants must have a security tag that I didn't remove. I glance around, and nobody even looks my direction. I proceed to leave the building.

      Then I remember that I've forgotten to buy a bus pass. I go back in. The alarm goes off. I head over to the customer service counter, and shell out $56 for a little card that will enable me to get to/from work for the next month. I leave again, and the alarm goes off. I wait a few minutes for the bus, and go home.

      I completely forget about the security tag until I'm wearing the pants and am on my way to catch the bus to work. I've gotten about a block when I hear a noise as I'm walking. Sure enough, there it is. I run home, try unsuccessfully to get it off, give up, change pants, and run to catch the bus. I arrive at work 15 minutes late. When I get home I finish mutilating the tag. Tough little buggers.

      So anyway, the moral of the story is that those little tags are absolutely worthless if store security is asleep at the wheel.
  • we need to take a minute to figure out why it doesn't work. Or maybe, instead of that, look at recognition that does work.

    I, for one, am pretty much 99.99% correct when it comes to making positive recognition of those people around me that I see often. People I haven't seen in a few years, I have more trouble identifying. Why? Because people's faces change. Facial hair, glasses (or removal of them), makeup, etc. can throw a lot of people off. Can this technology compensate for that?

    I personally think that these cameras need to look at people the way we do, with two eyes. What do we get when we look at the world with two eyes? Depth perception. We can see objects in three dimensions, because we see them from two angles at once. If facial recognition computers were able to take in two separate data streams, like two cameras a foot apart, it would be possible to create a three-dimensional image of that person's face. And though it would require more computing power, it is much easier to make a positive match using three-dimensional data as opposed to two. Ever seen a perfect frontal view photograph of a person's face? Can you tell how long their nose is when you're looking at it? Isn't the length of a person's nose a significant facial feature? (Oh, and I know, if you see a person from the side, you see that, but these cameras are always only getting one angle, so they're always throwing out a lot of data. If you see a person's face from the side, you are not seeing how wide their face is, and so on.)
    • I suspect that your own accuracy rate is not nearly as high as you believe it is.

      First, as you state, that 99.99% accuracy rate only applies to a group of people you meet regularly; this probably includes perhaps a few hundred people, and a significant part of your total memory and processing capability is devoted to recognizing and updating your memory of those faces (check out a brian map for how much of our cortex is dedicated to face recognition.) Even duplicating that feat (i.e. identifying a small group of faces) would be a major undertaking for a computer system.

      Second, that 99.99% isn't nearly as impressive as it sounds, because it represents the positive rate, i.e. the chance that you will correctly identify an individual in the target population. That corresponds to a false negative rate of 0.01% -- you're saying that once in ten thousand times, you'll actually fail to recognize somebody you see on a regular basis. Not too encouraging, that.

      Third, that figure says absolutely nothing about the false positive rate, which I suspect is much higher. In other words, how often do you see somebody that you think you recognize, but can't quite remember exactly? From my own experience, I would say that number is as high as one in a hundred. Our own built-in face recognition system is simply designed that way -- to generate a large number of "near misses".

      So, the bottom line is: even the supposedly high accuracy of human facial recognition isn't accurate enough, and undoubtedly doesn't scale very well.
      • by Anonymous Coward

        check out a brian map for how much of our cortex is dedicated to face recognition

        How much is used for transposing of letters? :)

    • Yes, stereo imaging and depth are needed. But when you look at a person the brain stores a "pattern" of how to recognize this guy again. It discards a shitload of unneeded information.

      If you don't believe me, try to draw a portrait of a close friend with pencil and paper. You'll find out you can't, or that it doesn't correspond to the real look. It's NOT that you can't draw (you can perfectly copy it if you have a B&W photograph). The thing is that you really abstract the look and only store tiny bits of angles, distances, colors, patterns, movements and facial expressions.

      You don't even know WHAT you are storing in the first place. Perception and pattern-matching are a very complex thing, and a thing far different than what one might guess.
    • And though it would require more computing power, it is much easier to make a positive match using three-dimensional data as opposed to two.

      Unfortunately, this apparently simple statement is not as true as it would seem to you, a human being equipped with staggeringly immense computational power and a brain specially equipped for this very task.

      In vision, there are two problems (at least). One is the usual problem of creating algorithms that can recognize things. The other is the staggering amount of data these algorithms must cope with.

      Many common vision applications (by which I mean not necessarily face recognition) involve taking the picture, which may start life at any resolution you please, sampling it down to 32x32 or 64x64 (if you're willing to stretch), dropping down to 4 or 6 bits color, and proceeding to do the analysis on this exponentially smaller sample size.

      Facial recognition algorithms do not always (often?) do this, but the problem of dealing with immense amounts of data does not go away. It simply exists in different ways. You're still trying to get invariant data (recognizing "Bob #2423" no matter what Bob is doing to fool the camera) out of a domain that has 2^(100*100*24) possible images (for a 100x100 full color RGB image; keep going up if you want something larger than 100x100, which is barely ID-photo sized.)

      Throwing more data at the problem does not necessarily get you ahead. You must always throw out the vast majority of it anyhow to get any real work done.
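
      As a minimal sketch of the kind of downsampling described above, using the Pillow imaging library (the file name and exact sizes are illustrative assumptions, not anything a real recognition product is known to use):

        # Reduce a face image to the tiny sample many vision pipelines work on.
        from PIL import Image, ImageOps

        img = Image.open("face.jpg").convert("RGB")  # input can be any resolution
        small = img.resize((32, 32))                 # sample down to 32x32
        coarse = ImageOps.posterize(small, 2)        # keep 2 bits/channel = 6-bit color
        coarse.save("face_small.png")

        # The space of 100x100 RGB images has 2**(100*100*24) members; after
        # this step the algorithm sees only 2**(32*32*6) -- still huge, but
        # exponentially smaller, which is the point made above.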

      (Also, you may be surprised; depth perception in humans is an interesting field of study. Less of it comes from your eyes than you may think; most of it comes from image processing. Your binocular vision has effectively no discrimination past six feet or something like that; I'd have to look the exact number up but it's shorter than most people would think.)

      • Also, you may be surprised; depth perception in humans is an interesting field of study. Less of it comes from your eyes than you may think; most of it comes from image processing. Your binocular vision has effectively no discrimination past six feet or something like that; I'd have to look the exact number up but it's shorter than most people would think.

        I probably ought to clarify that. In this domain the computer can indeed get a good depth perception shot if it wants. My point is that even humans make less use of this data than you might think, even at close range. Giving it to a computer adds new problems (handling that data), which may or may not be helpful anytime soon.

        "Some people, when confronted with a problem, think ``I know, I'll use regular expressions.'' Now they have two problems." - jwz [jwz.org]. It's similar to this, I think. Merely throwing more data at a vision problem often adds to the problem list more then it takes away, at our present state of knowlege.

        (Of course all of this is moot anyhow, because the math says even a human being isn't accurate enough to function as a facial recognition system anyhow. Computers aren't going to solve the problem. Nothing ever will. The math says it's impossible.)

    • > I, for one, am pretty much 99.99% correct when it comes to making positive recognition of those people around me that I see often. ... I personally think that these cameras need to look at people the way we do, with two eyes. ... Ever seen a perfect frontal view photograph of a person's face? Can you tell how long their nose is when you're looking at it?

      I certainly am not an expert in these matters, but based on half a lifetime's self-observation, I'm pretty sure that your recognition of your fellow humans is based on subtleties of appearance and mannerisms rather than on some hyper-analytical form-matching mechanism.

      I know that on several occasions I have been in a grocery store or somewhere and caught a former schoolmate out of the corner of my eye, recognizing him or her instantly. But as I approach to say 'hi' I get a better look and suddenly think that I have mis-recognized a stranger instead of correctly recognizing a former associate. It's only on the third or fourth look that I decide for sure that I should go ahead and say 'hi'.

      Also notice the frequent situation where half your friends think Little Joey looks like Mom and the other half think he looks like Dad. I hypothesize that that's because some are looking at (say) the shape of his nose and others are looking at (say) the shape of his eyes. I.e., humans apparently recognize people on a fairly arbitrary subset of subtle cues rather than matching a remembered 3-D 'mask' to their faces.

      As in so many other fields of AI, the technology that's on the market today falls far, far short of the basic abilities that humans -- and animals -- take so much for granted.

      I wonder what the best today's technology could actually deliver is. If you set a threshold of (say) a maximum of 0.1% false positives, what are the chances of actually recognizing someone in your criminal/terrorist database if they are actively trying not to be recognized? I suspect the performance is going to be pretty dismal.

    • Here's an article [usc.edu] by a leading expert, Irving Biederman, describing current thinking.

      He starts by describing basic object recognition, then theorizes on how face recognition builds on the basics and yet differs from, seemingly, recognition of all other types of objects.
  • Thank God that the scanners are out... even if it's not quite by the nose it said it was.

    Besides the infringement on civil liberties, what was troubling to me about the scanners is the reduction to a mathematical sequence... meaning quite literally, that we're just another number. How depressing.
  • I've got only one thing to say to the creators of this big brother device:

    IN YOUR FACE!

    heh.
  • Grousing... (Score:2, Offtopic)

    by Mulletproof ( 513805 )
    2002-05-19 16:06:51 Florida Face Recognition Fails (articles,privacy) -Rejected

    Gee, only beat this submission by about a month.

  • The notion that someone will repeatedly be "identified" as matching a particular face, is a very real concern for travelers. Already, we find that Americans with brown skin and beards, and especially persons who "look" Muslim, are hassled every time they enter an airport and often miss their flights. Non-citizens who "appear Muslim" should probably just give up on any idea of flying in the next few years.

    As noted, there can be no "get past ID check free" letter or ID card, since those would immediately be forged. And with a 50% false negative rate (missing a suspect 50% of the time), the system seems hardly worth using.

    I have not traveled by air since returning from Europe on September 19 (delayed from Sept. 12).

    In the past, I would have flown between the San Francisco Bay Area and the Los Angeles area (a 1-hour flight, using any of the airports on either end), but now it's actually likely to be faster to drive (around six hours each way), after including all the "waiting in line time," the increased flight delays, and of course the time to get into and out of the airports (park here, rent a car there).

    To be fair, of course, a system with a 50% false negative rate is presumably able to detect "known suspects" 50% of the time, which is almost certainly much better than human beings will ever do. Of course, the tests are probably being conducted under very favorable conditions, with an extremely small sample of "suspects." And of course, if the false-positives were equally distributed, we'd all be willing to suffer a one-in-a-thousand delay, if it actually had any meaningful benefit. (But we know that the false-positives won't be equally distributed, they will mostly affect persons in certain ethnic groups or with beards, etc., and while that means I'm less likely to be inconvenienced, I can't tolerate a system that punishes people for their skin color or ethnic background.)

    What's scary, to me, is that we are giving up so much (in many little bits and pieces) for so little benefit. On Saturday, I discovered that I couldn't use the restrooms in the BART (train) stations again, because they were closed to prevent someone from planting a bomb in them. Okay, so I had to hold it for an hour until I got home, big deal. And armed troops in the airports, and on bridges, okay, I can live with that one thing. And I can't drop off my express mail without handing it to a postal clerk now.

    But ding, ding, ding, we add up all the little "show-off" gimmicks and what we face is a huge impact that provides pretty much zero actual benefit. All the gimmicks combined might provide about 1% or 10% improved safety, at a much greater cost.

    While I was stuck in London during the week after September 11, I worried that things would change for the worse, not because of things that terrorists did, but because of the things we would do out of panic and fear and prejudice and idiocy. Things are nowhere near my worst fears, but I think things are very bad, and ultimately I believe that the terrorists have already "won" by causing most Americans to change multiple aspects of our "way of life."

    • While I was stuck in London during the week after September 11, I worried that things would change for the worse, not because of things that terrorists did, but because of the things we would do out of panic and fear and prejudice and idiocy.

      You say this, but further up your post you said this...

      I have not traveled by air since returning from Europe on September 19 (delayed from Sept. 12).

      Your reaction is one of "panic and fear and prejudice and idiocy". Having travelled extensively, both in and outside of US airspace, I can say the security on internal US flights is still worse than on internal flights in Europe.

      So you've let a bunch of terrorists stop you flying - that's the reaction they wanted. Why are you giving in...?

      Al.
    • I have not traveled by air since returning from Europe on September 19 (delayed from Sept. 12).

      In the past, I would have flown between the San Francisco Bay Area and the Los Angeles area (a 1-hour flight, using any of the airports on either end), but now it's actually likely to be faster to drive (around six hours each way), after including all the "waiting in line time," the increased flight delays, and of course the time to get into and out of the airports (park here, rent a car there).


      I've flown a couple of times since Sept. 11, and the only noticeable slowdown I've experienced is the removal of curbside check-in. The LAX security checkpoint is faster now than before; more security goons + more metal detectors = better throughput.

      Oh, and btw, when the check-in person asks you if you packed your own luggage and watched it at all times, "I didn't bring any luggage" is not the answer they want to hear... I'd try "mu" next time, but I think they'd be even less amused.

      --
      Benjamin Coates
  • by MikeD83 ( 529104 )
    At the metal detector a passenger's picture is taken. It is then compared to the database of known criminals.
    A security guard is sitting in front of a computer next to the x-ray machine, ready for a positive match.
    If you look nothing like the person (different race or something like that), you would be let through to the gate and not even know you were positively identified.
    If it may be a good match, you get stopped. The operator already has some information about the criminal in front of him, and will do a quick on-the-spot check. One thing that criminals are notorious for is tattoos. If the passenger doesn't have them (or signs of removal surgery), let them go. If the passenger is a very close match, do a more thorough examination.
    Every night there can be an audit of the matches to make sure the security personnel are doing their job. The system seems very effective to me.

    The system by Visionics looks at 80 different facial characteristics. The configuration used by the airport only needed 14 matches to produce a positive. It seems this is a setting in software and could probably be lowered to produce more positives. Even if they are false positives, the system I mentioned above would do the job.
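
    As a rough illustration of that kind of thresholded matching (the 14-of-80 configuration comes from the parent comment; the feature representation, function name, and tolerance here are hypothetical):

      # Hypothetical sketch: flag a match when enough facial measurements agree.
      def is_positive_match(probe, candidate, required=14, tolerance=0.05):
          """probe/candidate: lists of 80 normalized facial measurements."""
          agreeing = sum(
              1 for p, c in zip(probe, candidate)
              if abs(p - c) <= tolerance
          )
          return agreeing >= required

      # Lowering `required` (or widening `tolerance`) produces more positives,
      # true and false alike -- the trade-off the parent comment describes.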
  • Why on earth would you post a story about some Florida airport giving up on face recognition, when we just heard that New York City is already using this technology? [yahoo.com] It is already being used at the Statue of Liberty and Ellis Island to scan faces as people board the ferry. Way to go Jamie - raising that journalistic bar of integrity and thoroughness for everyone!
  • Look-alikes? (Score:5, Insightful)

    by TheSHAD0W ( 258774 ) on Monday May 27, 2002 @01:40AM (#3589738) Homepage
    Here's one for you: What would you do if you looked like a terrorist?

    Let's say, some time in the future, they get the face-scanning technology to work right. 0.000001% false-positive rate. And it's implemented all over the US.

    Let's also say that, among the 250 million people in the United States, one or more people had facial structures similar enough to terrorists' that they would trigger those scanners. In fact, they'd trigger every scanner that person was surveiled by. And let's say that person were you.

    What would you do?

    You couldn't go to an airport. You couldn't go to a major public attraction. You probably couldn't go to a public place without fear of some alarm going off and people waving automatic weapons in your face. Would you cower at home? Would you wear a bag over your head? Would you sue the US government? How would you cope?
  • I think the problem with security these days is that many people are looking for a one-solution-fixes-all type of thing. People need to realize that (and geeks know this, we do it on our computers ;) there are, and should be, multiple layers of security.

    Employing facial recognition is just one thing we can do - granted, we need to get the technology to work better, but we need to realize that it's multiple systems working together that is going to stop terrorists, not one or two "miracle systems."
  • False positives (Score:5, Insightful)

    by Restil ( 31903 ) on Monday May 27, 2002 @01:48AM (#3589759) Homepage
    We're not talking about using this technology to make courtroom identifications. We're using it to notify security that you MIGHT have someone in front of you who is of less than reputable character. This doesn't mean you immediately cuff him and throw him in jail, but if he tries to walk through a screener checkpoint it MIGHT be a good idea to do a little better check than a simple wand wave. In the meantime, someone can be checking the pictures to see if that person's face actually matches the match the computer made. With a .1% false positive rate, you could have a couple paid employees just looking at matching pictures to see if there's really cause for concern or not. At the rate people go through screening checkpoints now, they'll get a "match" about once every 10 minutes or so; your mileage may vary with larger airports, it's all a matter of scale.

    As for false negatives, even 50% is better than nothing as long as the false positive rate is much MUCH lower. Imagine catching 50% of the hijackers on September 11 before they boarded the planes. A lot of red flags could have gone up, flights could have been delayed, and the rest of the passengers could have been more carefully scrutinized. No, this is not the solution to any problem. And no, it should not be used legally any more than a lie detector can be. It's a guide. It tells us where we might need to concentrate more of our efforts.

    As far as threats to privacy go, this makes sense in an airport, but it does not make sense out on the street. People go into an airport expecting to be searched, questioned, carded, etc. They do not have the same expectation while walking down the street. So unless the cops are currently chasing someone, they lose him, and you have a striking resemblance, they shouldn't bother you at all.

    -Restil
    • This is one of the occasions where the privacy lobby goes too far. A 0% false positive rate isn't possible in a system like this. A human being wouldn't even get 0% false positives or negatives. How common is the feeling you've met someone before, even when that is unlikely? Or have you spotted a long lost friend in a crowd, only to discover it wasn't him/her? There just are too many people that look alike.

      IMHO, a system that recognises faces in the manner needed for an airport shouldn't have to be 100% correct. As it isn't autonomous, but requires human confirmation to arrest someone, it's a tool for security. It's like an electronic wanted poster.

      Of course, I'm not saying a face recognising system was viable on this occasion. I'm sure the authorities did well not to implement it. Yet I'm not so sure that it couldn't be an improvement in security without sacrificing any extra privacy.
    • Imagine catching 50% of the hijackers on September 11 before they boarded the planes.

      One small problem ... how do you get those faces into the database to be checked against? Some of the most "recent" photos of terrorists may be >10 years old ...

      So ... you have old photos of ~<500 known terrorists ... against ~>220 Million "good guys" ... you can see that you'll have so many false positives compared to real positives. (NOTE: numbers pulled out of my @$$ ... this is an example)

      One thing that was (and still is) really irritating about the whole "we-need-better-security" mentality after 9/11 is that it runs into a fundamental problem.

      That problem is ... until you get to a "1984" society, there is no absolute security. The only security is a false sense of one.

      Suppose we HAD scrutinized passenger lists more ... then what? Any potential terrorists would know that. They'd use something that the drug dealers use ... mules (people who are paid to bring drugs across the border, and have little/no background).

      Now don't get me wrong here ... I think what happened was a tragedy, and I hope it doesn't happen again. However, given the openness of our society, I doubt that anything substantial will change in the long run.

      Just recently (mid-May) I flew to BWI (Baltimore/Washington), and the "security" was about the same as when I flew to Vegas a couple of years ago. In fact, during the Vegas trip, my carry-ons were inspected ... not this latest trip though.

      It's a balance between being secure and appearing to be secure.

    • Re:False positives (Score:2, Insightful)

      by Llywelyn ( 531070 )
      Under the spreading chestnut tree
      I sold you and you sold me
      There lie they and here lie we
      Under the spreading chestnut tree
  • ...because the false negative percentage means that someone who is probably exceedingly dangerous has a 1-in-2 chance of getting past your system. Making sure that Joe Bob doesn't have a bomb is something we're already doing now. Grandmothers, Congressmen, children are being frisked for bombs. At least these false positives would be somewhat more understandable.

    The false negatives just make an already porous system even more so because whatever face-recognition system that gets put in place would in all probability be relied on to make sure it at least didn't miss anyone. If these systems get in place, we'll be less secure, 'cause the guards won't be on as high an alert, thinking the cameras will do it all for them.
  • Figures... (Score:4, Funny)

    by Decimal ( 154606 ) on Monday May 27, 2002 @02:50AM (#3589861) Homepage Journal
    The system can't even tell Gore from Buchanan. Typical of Florida.
  • Facial scanners will never stop criminals / terrorists. So what's next? A law called.. oh.. I dunno.. FSATPA? uh.. "Facial Scanner Anti Terrorism Protection Act" which classifies makeup, wigs, haircuts, plastic surgery, masks, and colored contact lenses into illegal "facial scanner circumvention devices"? Not likely.

    IMO, face scanning is the single most worthless biometric in existence--not that I'd advocate any others. If entrepreneurs want to do something useful to increase security, they ought to improve devices which sniff for high explosives so I don't have to take off my frigg'n shoes every other time I fly.
  • by Jeppe Salvesen ( 101622 ) on Monday May 27, 2002 @03:37AM (#3589935)
    Where were you guys in stats class?

    If you took this technology, made it match on too many faces and then had someone manually double-check the potential match, you would have a kick-ass system.

    Like all powerful technology, its use must be ethical.
    • This problem is exactly analogous to the proposal to test all married couples for HIV that went around Chicago some years back. Surprise, surprise, the base rate of HIV among to-be-married couples was quite low. More false positives than true positives. Lots of wasted time, money, and stress on re-screening.

      As you may know, Bayes' Theorem (actually a statement of fact in probability theory) says:

      Post-test odds = Likelihood Ratio * Pre-test odds

      (Where the likelihood ratio for a positive test is the sensitivity/(1-specificity), or TP rate / FP rate)

      If your pre-test odds of being a terrorist are very low (and when you consider how many terrorists fly compared to how many non-terrorists fly, they must be exceedingly low), you're going to need a very, very powerful ("highly specific" in medical terms) test if you want to reliably determine that a given person ought to be treated with greater care.

      On the other hand, if they were planning to spend a lot of time and money screening people anyway, and they could improve their sensitivity (TP rate), facial recognition might be a (statistically) sound approach to screening *out* suspects. That is, once you pass a face-detection screen that has a high TP rate, you don't need to be subjected to as much extra screening; but if you fail the face-detection screen, it's not really diagnostic.

      Normally, you could use my diagnostic test calculator [uic.edu] to fool around with numbers yourself and see what the impact would be, but it appears to be down until I can get to the server (dratted dist upgrade!)
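
      A minimal sketch of that likelihood-ratio arithmetic (the sensitivity, specificity, and base rate below are made-up numbers for illustration only):

        # Post-test odds = likelihood ratio * pre-test odds (Bayes' Theorem).
        sensitivity = 0.999              # TP rate: P(flag | terrorist)
        specificity = 0.999              # TN rate: P(no flag | non-terrorist)
        lr_positive = sensitivity / (1 - specificity)  # ~999

        pretest_odds = 1 / 9_999_999     # one terrorist in ten million flyers, as odds
        posttest_odds = lr_positive * pretest_odds
        posttest_prob = posttest_odds / (1 + posttest_odds)

        print(f"P(terrorist | flagged) = {posttest_prob:.4%}")  # ~0.01%
        # Even a ~1000-fold likelihood ratio leaves the flagged person almost
        # certainly innocent, because the pre-test odds are so tiny.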

  • Or, how come they forgot that terrorists are not likely to go through the security checks looking exactly the same as their photographs stored in the database?

    Of course the criminals will try to look different. And they will succeed. This system is based on corrupted principles; it is actually only good for recognising people who have no reason to change their face when entering the plane. It will recognise your mom, your dad, the girl next door - but it will NEVER recognise the terrorist.

    It will only cause extra hassle, and an added false sense of security.
  • This all assumes that the terrorists will not try to fool the system. If a face recognition system was implemented at a given place, don't you think the terrorist would try to fool that system in some way with some kind of "fake faces"?

    I assume that fingerprint readers should be much easier to make than this technology, correct? The fact is that those can be *very easily* fooled too! Read the latest Crypto-Gram newsletter [counterpane.com] for a story about how easy it really is - it's so easy it's almost scary.

    How easy, then, will it be to fool this?

  • by Rogerborg ( 306625 ) on Monday May 27, 2002 @06:49AM (#3590221) Homepage

    Because when they finally get it working right, with a really high degree of accuracy, then it'll positively identify me, and I'll be allowed to exercise my rights to have and bear arms on an airline for the purpose of forming a well ordered militia. Surely this situation exemplifies the purpose of the second amendment; an armed populace defending itself from attack.

    What's that you say? That this won't happen? That security will still be something performed by bored and uninterested employees on the ground, not by the people under direct threat? That all this technology will do is remove rights and further entrench the mentality that We, the People must be protected by a tiny minority of largely unanswerable and self-appointed professionals.

    Sometimes I wonder why we bother even pretending that the Constitution still applies. If anyone can think of a more relevant application for the Second Amendment short of a full scale invasion, I'd like to hear it.

    • Because when they finally get it working right, with a really high degree of accuracy, then it'll positively identify me, and I'll be allowed to exercise my rights to have and bear arms on an airline for the purpose of forming a well ordered militia.

      The Second Amendment apparently also gives you the right to remain ignorant. Any guns brought on board would need to be specially designed for that purpose, so as not to create holes in the aircraft when YOU MISS your target (or completely penetrate it).

      Not only that, but the main argument against arming people on board planes is that it makes it just a WEEEEE bit easier for a terrorist to bring arms on the plane or steal them from guys like you.
  • As I would the false negatives. If the software had a large percentage of false positives and a very low number of false negatives, you could train people to "check" the software's results. In that case it could be useful by pointing out potential candidates. But if the SW has a high number of false negatives it is useless, because the people still have to do all the work and they have spent money on something useless.

    -- Tim
  • by Zero__Kelvin ( 151819 ) on Monday May 27, 2002 @09:28AM (#3590475) Homepage


    By reacting the way we are in the U.S., Osama bin Laden is getting exactly what he was aiming for. He wanted to destroy the American way of life, and by removing freedom and civil rights the way we are, he is achieving his goal. There is no longer any need for him to act. We have met the enemy, and it is U.S.
    • "I tell you, freedom and human rights in America are doomed. The US Government will lead the American people - and the West in general - into an unbearable hell and a choking life" - Osama bin Laden

      It's quite ironic how the terrorists have won the "war on terror" the moment the US government started it.

  • The decision by PBIA to not use the equipment had nothing to do with the accuracy of the equipment. Here [gopbi.com] is a less sensationalized story from the local newspaper which states "PBIA's decision to remove the equipment and not buy it reflects the federal government's takeover of airport security". The article mentions that the tests of the equipment were solicited right after the 9/11 incident, prior to the federal government announcing it would be addressing airport security. So the inaccuracy is not the reason this technology didn't end up in the airport.

    maru
