Software Technology Science

Casting Doubt On the Hawkeye Ball-Calling System

Human judgment by referees is increasingly being supplemented (and sometimes overridden) by computerized observation systems. nuke-alwin writes "It is obvious that any model is only as accurate as the data in it, and technologies such as Hawkeye can never remove all doubt about the position of a ball. Wimbledon appears to accept the Hawkeye prediction as absolute, but researchers at Cardiff University will soon publish a paper disputing the accuracy of the system."
This discussion has been archived. No new comments can be posted.

  • Why not use... (Score:5, Insightful)

    by Kagura ( 843695 ) on Saturday June 28, 2008 @11:45PM (#23987315)

    Why not use a radio transmitter in the tennis ball (or soccer ball or whatever) to record its exact position? I am certain this has been discussed and I wouldn't be surprised if it's already in use. The article's "Hawkeye" just works by optical analysis.

  • Anonymous Coward (Score:3, Insightful)

    by Anonymous Coward on Saturday June 28, 2008 @11:59PM (#23987393)

    They're reproducing stuff that's already known. Yes, Hawkeye can be inaccurate. However, it's MORE accurate than linesmen and certainly the chair umpire. That's why it's used as the definitive word.

    I'd certainly prefer it to be used otherwise - the best way would be to give the chair umpire the information from HawkEye and then let him decide whether to use it or not at any given time, properly educated about the types of errors the machine can make - but that wouldn't be as flashy, would it? So the advertisers wouldn't go for it.

  • by Anonymous Coward on Sunday June 29, 2008 @12:04AM (#23987417)

    Hawkeye and the like deliver a consistent result. It matters not at all if the ball is in by two centimetres but is called out, provided that error is consistent throughout the match.
    If both players, or teams, are playing by the same margin of error, the contest is fair.
    In cricket, for instance, I would accept the computer's call over the umpires' any day of the week!

  • by the_other_chewey ( 1119125 ) on Sunday June 29, 2008 @12:33AM (#23987557)
    The accuracy has absolutely nothing to do with the overdetermination of the system. If it had, it would be simple to reduce the number of cameras to three, and boom - perfect position. That's obviously not how it is.

    And of course the number of cameras does increase the precision of the computed position - the principle is exactly the same as for GPS, where more satellites are better as well.

    Using a certain fitting method (least squares, least absolutes, etc.) has nothing whatsoever to do with something like "complementing the equations"; that's just necessary because no measurement is perfect. You are arguing that multiple measurements do not increase the accuracy of a computed average because there are multiple averaging algorithms to choose from.

    Bullshit.
  • by Rakishi ( 759894 ) on Sunday June 29, 2008 @01:17AM (#23987751)

    A system such as Hawkeye CANNOT BE MORE ACCURATE than humans.

    Of course it can be; humans are not 100% accurate, and even human eyes aren't 100% accurate.

    From the link in the article, the Hawkeye system uses 5 cameras to compute the 3D position of the ball. That's an overdetermined system of equations, which cannot have a unique solution due to observation errors in the camera views.

    That it's overdetermined doesn't matter since in the end the error of those combined non-unique solutions is still less than that of a non-overdetermined system of the same cameras.

    So Hawkeye has to complement the equations with an ARBITRARY rule, eg least squares, and this arbitrariness makes the Hawkeye estimate neither more accurate nor less accurate than humans, just different. FYI, there are plenty of other arbitrary rules that work, eg least absolute errors, maximum entropy, etc.

    That it uses an arbitrary rule says NOTHING about it being capable of more accuracy than a human. Accuracy is easy to determine (via experimentation if you wish), and claiming that we somehow magically can't measure it is idiotic. For example, a checkers program plays the game differently than a human, but one can still claim the program is better than a human (since no human can beat the best checkers program, from what I remember). It may be possible that neither humans nor this system are better in every case, but that still doesn't mean one can't inherently be better (i.e., if the cameras are accurate enough). In fact, even if one doesn't dominate the other, one can still use some measure to determine which is more accurate (on average, etc.).
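
    To make that concrete, here is a small Monte Carlo sketch (a toy 2D bearing model with made-up positions and noise figures, not Hawkeye's actual geometry or solver): each simulated camera contributes one noisy line constraint, the overdetermined system is solved by ordinary least squares, and the average position error with five cameras comes out smaller than with two.

```python
# Toy illustration: an overdetermined least-squares fit from several noisy
# "cameras" is, on average, closer to the truth than a minimal two-camera fit.
# All positions and noise figures below are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
true_pos = np.array([11.0, 0.05])   # hypothetical ball position near a line (metres)
noise_rad = 0.002                   # assumed bearing noise per camera, ~0.1 degree

def estimate(camera_positions):
    """Least-squares position from noisy bearing observations."""
    rows, rhs = [], []
    for c in camera_positions:
        dx, dy = true_pos - c
        bearing = np.arctan2(dy, dx) + rng.normal(0.0, noise_rad)  # noisy observed bearing
        n = np.array([-np.sin(bearing), np.cos(bearing)])          # normal to the observed ray
        rows.append(n)                                             # constraint: n . (x - c) = 0
        rhs.append(n @ c)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol

cams_2 = [np.array([0.0, -5.0]), np.array([0.0, 5.0])]
cams_5 = cams_2 + [np.array([22.0, -5.0]), np.array([22.0, 5.0]), np.array([11.0, 8.0])]

for label, cams in [("2 cameras", cams_2), ("5 cameras", cams_5)]:
    errs = [np.linalg.norm(estimate(cams) - true_pos) for _ in range(2000)]
    print(f"{label}: mean position error = {1000 * np.mean(errs):.1f} mm")
```

    The residual error never reaches zero, but it shrinks as independent views are added; that is the sense in which overdetermination helps rather than hurts.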

  • Re:Why not use... (Score:4, Insightful)

    by Drathos ( 1092 ) on Sunday June 29, 2008 @01:47AM (#23987857)

    Fox tried to do that with hockey back in the 90s in order to make the puck easier to see on TV (personally, I've never had a problem seeing the puck). The Glow Puck was horrible. When there was a jam up in the corner, it would literally be bouncing all over the screen. It also changed the way the puck performed on the ice. Because of the electronics and battery inside, they couldn't freeze the puck like they normally do, causing it to bounce a lot more and not slide on the ice as easily.

    In a hollow sphere like a tennis ball, how would you keep the dynamics of the ball the same as they are when you add a transmitter to it? If you adhere it to the side, the ball will be off balance. If you create some internal structure/support to keep it centered, you change the deformation during a bounce/hit.

  • by SnowZero ( 92219 ) on Sunday June 29, 2008 @01:51AM (#23987875)

    A system such as Hawkeye CANNOT BE MORE ACCURATE than humans. From the link in the article, the Hawkeye system uses 5 cameras to compute the 3D position of the ball. That's an overdetermined system of equations, which cannot have a unique solution due to observation errors in the camera views.

    Luckily there's a 100+ year old discipline called statistics, and 60+ years of literature on tracking to help you out in these cases.

    So Hawkeye has to complement the equations with an ARBITRARY rule, eg least squares and this arbitrariness makes the Hawkeye estimate neither more accurate nor less accurate than humans, just different. FYI, there are plenty of other arbitrary rules that work, eg least absolute errors, maximum entropy, etc.

    While I can't speak for the designers of the Hawkeye, in tracking there are very good reasons to choose one form of error minimization versus another. It only seems arbitrary because you are not informed on the subject, but there are plenty of free papers out there to read and discover.

    To explain current methods, please start out with this paper [google.com]; in particular, in Figure 2 you'll see that the sort of errors you get from a camera are indeed well fit by a Gaussian. While a camera's perspective transformation is not purely linear (and various forms of distortion make it also non-linear), a good camera with a decent lens estimating the ball location within a limited area is well approximated by a linear model (and you can characterize just how large the error is). Now, a bunch of cameras with a Gaussian error distribution in the image plane and a linear projection out into the world is still a Gaussian (with a transformed covariance matrix). You can then multiply the independent measurements from multiple cameras to get a better estimate. Add a time series to that and apply this recursively and you get a Kalman filter [unc.edu], something invented for aerial tracking and still in widespread use today. If something is good enough for missiles to intercept other missiles, it ought to be good enough for a tennis match.
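
    As a rough illustration of the multiply-the-Gaussians idea (a minimal one-dimensional sketch with invented numbers, not the actual Hawkeye pipeline): predict the ball's position with a simple motion model, then fuse the prediction with each noisy camera measurement; applied recursively, that is a 1D Kalman filter.

```python
# Minimal 1D Kalman-filter sketch: fuse a motion-model prediction with noisy
# measurements by multiplying Gaussians.  All numbers are assumptions made up
# for the example, not Hawkeye parameters.
import numpy as np

def fuse(mu1, var1, mu2, var2):
    """Product of two Gaussian estimates is a tighter Gaussian estimate."""
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var

rng = np.random.default_rng(1)
true_x, velocity, dt = 0.0, 30.0, 0.01    # ball moving at 30 m/s, 100 Hz frames
est, est_var = 0.0, 1.0                   # initial guess and its variance
meas_var, process_var = 0.05**2, 0.5**2   # camera noise, motion-model uncertainty

for _ in range(50):
    true_x += velocity * dt
    # Predict: move the estimate forward with the motion model, grow uncertainty.
    est += velocity * dt
    est_var += process_var * dt
    # Update: fuse the prediction with a noisy camera measurement.
    z = true_x + rng.normal(0.0, np.sqrt(meas_var))
    est, est_var = fuse(est, est_var, z, meas_var)

print(f"final error: {abs(est - true_x) * 1000:.1f} mm, "
      f"posterior std: {np.sqrt(est_var) * 1000:.1f} mm")
```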

    If the linear approximation is not good enough for you, you can use a Rao-Blackwellized Kalman filter. If that's still not good enough because you want to use another error distribution or non-linearizable dynamics, set up a particle filter with a whole lot of particles and enough CPU to simulate it. The point is that what you call arbitrary is a well-studied field which is many decades old. You'd be best served by learning about it first before you cast away all that work. I'm not a "tracking" person, just a user of their work. When a field of science has done its job well enough that it has become common engineering, and you can go look up whatever you need in books, with all the derivations, caveats and tradeoffs laid out there for you to see, I would say that that field has done a pretty good job.

    The whole media story around this paper is ridiculous. It's a paper from a social sciences department about how the public does not understand the fallibility of these machines due to noise. That's all this paper is about: Hawkeye has error. I hate to break it to the uninformed, but all measurement systems have error. From Galileo [wikipedia.org] to Gravity Probe B [wikipedia.org], your results can only be as accurate as your measurements, calculations, and statistical models will allow. You can decrease error with various methods, but you can never completely eliminate it. People should not be able to get out of high school without understanding the accuracy of measurements, and some rudimentary statistics, but unfortunately our education system hasn't been able to reach that goal. As a result, the public doesn't understand error, and might come to believe…

  • by DriedClexler ( 814907 ) on Sunday June 29, 2008 @02:05AM (#23987935)

    I'm confused. Why would umpires oppose a technology that can automate the refereeing of a game? It just doesn't make any sense.

  • by Rakishi ( 759894 ) on Sunday June 29, 2008 @02:30AM (#23988017)

    You're missing the point. Accuracy makes no sense unless you include the error criterion. Any estimation algorithm has an arbitrary error criterion, as do humans. Neither is more accurate than the other, they're just different estimation procedures.

    No, talking about the size of the error makes no sense if you haven't specified a regularization criterion. Now choosing the criterion is essentially equivalent to choosing what the theoretical answer should be, so it's circular reasoning to claim that the resulting error would be smaller.

    That's a silly argument because it basically says "nothing is better than a human because a human is not optimized for the problem" or "we can never determine what is better because we need to first determine what better is and there is more than one possibility." You can, I'm assuming, create a system that is in fact more accurate across all error criteria, but that's a separate point. There already is an error criterion in place, since human judges must somehow be chosen and evaluated. The computer system in fact uses a different error criterion as a stepping stone, because the true measure is not as easy to write in an algorithm.

    No it's not. If it were, there would be no issue. The issue is that these systems converge to some estimate, but the estimate need not be meaningful.

    This system is designed, in the end, to determine if a ball is on one side of a white line or the other. THAT is the error criterion, and everything else is irrelevant, just a step to that end goal, or an easier-to-measure version of that end goal. Interestingly enough, the comparison isn't versus humans but rather versus humans using the existing computer system.

    For example, do you want to minimize the Euclidean distance of the estimated position against the true position, or do you want to minimize the error in a single coordinate only, or maybe you want to minimize the roughness of the trajectory of the ball over some time interval, or ....

    This is irrelevant to using overdetermination, since the problem probably also exists even with 3 cameras. I'd guess that their placement or design parameters can easily lead to different error measures being optimized.

  • by Moridineas ( 213502 ) on Sunday June 29, 2008 @02:36AM (#23988043) Journal

    I'm willing to concede that you are talking theory at some level I don't fully grok. What I think you're completely missing in this discussion stems from your original statement that "a system such as Hawkeye CANNOT BE MORE ACCURATE than humans", which does not seem to be possibly true by any standard definition of these words that I am familiar with.

    You can talk about "error criterions" and odd offtopic tangents about targeting algorithms etc, but the bottom line is, your original statement is completely wrong.

    You say "So Hawkeye has to complement the equations with an ARBITRARY rule, eg least squares, and this arbitrariness makes the Hawkeye estimate neither more accurate nor less accurate than humans".

    That's both wrong and illogical. Yes, Hawkeye is estimating a solution, and using an "arbitrary" (again, this is an utterly bizarre and incorrect word choice--the makers of Hawkeye have presumably done a great deal of testing to pick an algorithm, which is NOT arbitrary) method to estimate. However, if Hawkeye ESTIMATES the correct answer more often than a human judge, then Hawkeye is more accurate than a human judge. The methods it uses are really completely irrelevant to the final answer.

    So in short, it seems that this is a discussion in which your usages of "accurate," "error," "arbitrary," etc. are different from those of the rest of the people in the thread. Please let me know if I'm misinterpreting something though!

  • by Rakishi ( 759894 ) on Sunday June 29, 2008 @03:13AM (#23988207)

    Just because an umpire is the final word doesn't mean that a system can't do better than him. That is because the umpire is in fact trying to measure something with a right/wrong answer. Specifically, the umpire is the person who decides if event X happened or not, which means that the goal is to see if X happened or not (not to see if the umpire thought X happened or not). The umpire isn't an inherent part of the rules but simply a judge to determine if something specified in a certain rule happened or not. As a result, it's a perfectly valid problem to predict this event X with a method that is better (i.e., lower misclassification) than the umpire. Finding the winner in a horse race is one example of where technology is more accurate despite the rules likely having a person originally be the final judge.

    One problem is that sometimes one can't measure the true answer in any way, so there is no way to truly measure accuracy for a problem. That is a valid problem; however, I have no clue whether that or something else is the actual problem you're so concerned about (your posts are as clear as black mud). In this case there probably are more accurate ways of measuring the truth, although these take excessive money, time or preparation. One could, for example, cover the ground around the line with wet paint (or some such) and then check for breakages, or simply cover the ground with pressure sensors. The article implies they can measure the accuracy of the system compared to the true impact point, which means that one can devise experiments in which one can measure the truth of where the ball lands.

  • by Grey Haired Luser ( 148205 ) on Sunday June 29, 2008 @03:44AM (#23988335)

    I'm sure you've all noticed that since the introduction of Hawkeye, networks have all consistently stopped showing those wonderful slo-mo replays, which, more often than not, would simply show that the machine was in error.

    The irony, of course, is that those replays are being ignored just at the time when high-speed camera technology was getting good and cheap enough to be useful for umpiring.

    A much better system would be to allow players to ask an umpire for a video replay on demand, with a player allowed to be wrong at most twice in a row.

  • by TapeCutter ( 624760 ) * on Sunday June 29, 2008 @03:44AM (#23988339) Journal
    "system such as Hawkeye CANNOT BE MORE ACCURATE than humans.......You're missing the point."

    As the other poster implied, your first assertion is what the "point" is. Speaking of points, your last paragraph doesn't seem to have one, it basically says different problems have different equations and answers.

    I would also suggest that an empirically derived 4mm error is demonstrably more accurate than any human, and no amount of irrelevant math will change that. If what you and TFA are trying to say is, "it's foolish to believe technology is foolproof", then a primitive "duhhhhh" response is all I have.

    Trivia: The aggrieved player must call for the Hawkeye decision if he disputes the umpire; each player is only allowed 3 disputes per somethingorother (lady friend is the tennis fan).
  • by stranger_to_himself ( 1132241 ) on Sunday June 29, 2008 @04:46AM (#23988553) Journal

    Yes, some people also want to use Hawkeye for some decisions in cricket, the sport that first used it. However, the margin of error is far greater in cricket (approximately ±2 inches), as the cameras have to be a lot further away due to the size of the pitch.

    The other key difference in cricket is that Hawkeye is used to predict where the ball would have gone had it not hit a pad, whereas in tennis it only needs to say where the ball actually was.
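
    As a toy illustration of why prediction is the harder job (a made-up parabolic track and noise figures, nothing to do with Hawkeye's actual ball model): fit a curve to noisy observed samples, then compare the error of a point inside the observed span with the error of a point extrapolated well beyond it.

```python
# Toy sketch: extrapolating a fitted trajectory is far less certain than
# interpolating within the observed samples.  Track shape, sample rate and
# noise level are all invented for the example.
import numpy as np

rng = np.random.default_rng(2)
t_obs = np.linspace(0.0, 0.20, 12)               # 12 observed frames (seconds)
true = lambda t: 1.0 + 25.0 * t - 4.9 * t**2     # hypothetical ball track (metres)

interp_err, extrap_err = [], []
for _ in range(2000):
    z = true(t_obs) + rng.normal(0.0, 0.005, t_obs.size)  # 5 mm measurement noise
    coeffs = np.polyfit(t_obs, z, 2)                       # least-squares parabola
    interp_err.append(abs(np.polyval(coeffs, 0.10) - true(0.10)))  # inside the data
    extrap_err.append(abs(np.polyval(coeffs, 0.45) - true(0.45)))  # well past it

print(f"mean error inside observed span: {1000 * np.mean(interp_err):.1f} mm")
print(f"mean error when extrapolating  : {1000 * np.mean(extrap_err):.1f} mm")
```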

  • by aussie_a ( 778472 ) on Sunday June 29, 2008 @06:48AM (#23989029) Journal

    This is an American website. If you want to be understood by the majority of visitors, you need to use the American terminology.

  • Re:Why not use... (Score:3, Insightful)

    by jez9999 ( 618189 ) on Sunday June 29, 2008 @06:51AM (#23989041) Homepage Journal

    Could someone explain to me how this would be any more precise than high-quality optical analysis? Usually, in the slo-mo replay recorded by even the *average* quality cameras for TV audiences, you can almost always tell whether a ball was in or out. Make that higher quality cameras with a higher frame rate, and optical analysis seems like a very good way to do it, to me. It is, after all, what human umpires do.

  • Re:Why not use... (Score:3, Insightful)

    by rant64 ( 1148751 ) on Sunday June 29, 2008 @06:59AM (#23989073)

    Also, a radio transmitter cannot account for the distortion of a ball upon impact

    I seriously doubt that an umpire can.

    Hawkeye's also being used in snooker now, and it actually looks very accurate. The refs always re-spot the ball at least 2 inches away from the spot where it was, and I don't see why they're not using this more often.

    Honestly, even if the Hawkeye system is off by a few millimeters, if I were a pro tennis player then I'd rather have a call which is at most 3mm off than a call from an umpire who maybe wasn't paying close attention and calls whatever he thinks is right.

  • Re:Why not use... (Score:2, Insightful)

    by Anonymous Coward on Sunday June 29, 2008 @07:12AM (#23989139)

    ...leaving the final decision to a human from TFA is quite reasonable and is how it should be.

    No it isn't, it's ridiculous. On what basis is an umpire supposed to overrule a machine with 4mm accuracy? True, the machine may be "wrong" from time to time, but by trusting a machine you create a deterministic rule set which is completely neutral. It is precisely the fact of removing a human from the equation that makes Hawkeye so useful and so McEnroe-proof.

  • Re:Why not use... (Score:3, Insightful)

    by nine-times ( 778537 ) <nine.times@gmail.com> on Sunday June 29, 2008 @09:05AM (#23989621) Homepage

    Blindingly trusting technology or discarding it altogether is unreasonable.

    I disagree. Since this is a game, it seems to me the most important thing is that the rules are applied consistently and impartially. Accuracy may be the goal of making the rules, but once the rules are set, I'm much more concerned about the consistency and impartiality.

    They played tennis for quite a long time without the technology, and so it's evident that discarding it altogether wouldn't be so bad. Accuracy isn't really the issue. You could decide all disputes with the roll of a 12-sided die, and it would still be fine, in the sense that it would become part of the game and players could adjust their strategies accordingly. As long as it was consistent, it would be fair.

    So the only question in my mind is: is the Hawkeye inaccurate in a way that would cause players to use strategies that would be bad for the game? For example, if it were truly random, then players might start appealing every call. If they have nothing to lose and a random chance at success, then why not?

    But the use of the Hawkeye system doesn't seem to have any effect like that on the game, so I don't see what the problem is with trusting it blindly. Even if it makes occasional bad calls, they don't seem to be any worse than the call a ref would make. If anything, in those instances, it might even be better for a machine to make an arbitrary bad call, because at least you know the machine won't favor a particular player.

  • Re:Why not use... (Score:3, Insightful)

    by CastrTroy ( 595695 ) on Sunday June 29, 2008 @12:53PM (#23991429)
    They've been using human referees for a long time, but they really haven't had any other option up until this point. You could say the same thing about the introduction of the car. Everybody's been using horses and buggies for getting around for a long time. Since they've gone so long without the technology, it's evident that discarding it wouldn't be so bad. Even if the error is as large as 1 cm, I would say that's pretty good. How good is the accuracy of human vision at a distance of 40 feet with an object moving at 200 km/h?
  • by Oktober Sunset ( 838224 ) <sdpage103NO@SPAMyahoo.co.uk> on Sunday June 29, 2008 @06:25PM (#23993909)
    Obscure to people with a life, but as this is slashdot, there aren't many of those around.
