Software / Science

Particle Swarm Optimization for Picture Analysis 90

Roland Piquepaille writes "Particle swarm optimization (PSO) is a computer algorithm based on a mathematical model of the social interactions of swarms which was first described in 1995. Now, researchers in the UK and Jordan have carried this swarm approach to photography to 'intelligently boost contrast and detail in an image without distorting the underlying features.' This looks like a clever concept even if I haven't seen any results. The researchers have developed an iterative process where a swarm of images are created by a computer. These images are 'graded relative to each other, the fittest end up at the front of the swarm until a single individual that is the most effectively enhanced.'"
This discussion has been archived. No new comments can be posted.

  • Wow (Score:5, Insightful)

    by Izabael_DaJinn ( 1231856 ) <slashdot@@@izabael...com> on Monday February 04, 2008 @03:27AM (#22288510) Homepage Journal
    I love an article on digital imaging technology that has no pictures. This is 2008. Send out your press release with a photo...of something...anything.
    • Re:Wow (Score:4, Funny)

      by Smordnys s'regrepsA ( 1160895 ) on Monday February 04, 2008 @03:57AM (#22288654) Journal
      Let me just point out your sig

      Careful What You Wish For..

      So, did you realize an optimized goatse fits your wish for a picture of "something...anything"?
    • Re: (Score:2, Insightful)

A picture's worth a thousand words. Strikes me that what they're implying is taking CCTV footage (MJPEG/MPEG) and correlating the differing images (frames/fields, really). I don't think that manipulating one CCTV image over and over will ever produce results like those seen on CSI!
      • Re:Wow (Score:4, Interesting)

        by MobileTatsu-NJG ( 946591 ) on Monday February 04, 2008 @12:51PM (#22293160)

        A picture's worth a thousand words.

        Strikes me that what they're implying is taking CCTV footage (MJPEG/MPEG) and correlating the differing images (frames/fields, really). I don't think that manipulating one CCTV image over and over will ever produce results like those seen on CSI!
        Well, c'mon, CSI's trying to entertain an audience here. That said, video does offer some potential for enhancement. I've seen technology that can take a sequence of images, extract the motion out of them, and use that to work out a higher resolution image by watching how the pixels shift in color. There are cases where a car could drive into frame but the resolution is too low to make out the license plate. But if the motion of that car is extracted, and assuming that motion is actually of a useful vector, they can watch how the pixels shimmer and figure out what the color of the pixels in between them was supposed to be. When the image is reconstructed, the license plate could be read.

        It's not as magical or practical as they show on CSI, but there are cases where it can be done. Heck, Hollywood uses technology like that to slow down video, like the bullet-time effect in The Matrix. There's a lot you can do with motion vectors.
      • From TFA it sounds more like an evolutionary algorithm than anything to do with swarms. It said the word swarm over and over but didn't actually describe anything to do with them...instead it talked about how to solve the traveling salesman problem.
    • I love an article on digital imaging technology that has no pictures. This is 2008. Send out your press release with a photo...of something...anything.

      What do you think this is.. /b/?
  • The only problem... (Score:5, Informative)

    by arrrrg ( 902404 ) on Monday February 04, 2008 @03:30AM (#22288520)
    with PSO, ant colony optimization, genetic algorithms, etc. is that they take tons of computational effort, and typically work no better than (or significantly worse than) much more efficient direct optimization methods. Wake me up if they show good results (esp. that didn't take a year of computer time to construct).

    P.S. IAAAIR (I am an AI researcher, albeit not in computer vision)
    • by TapeCutter ( 624760 ) on Monday February 04, 2008 @03:46AM (#22288606) Journal
      "the fittest end up at the front of the swarm until a single individual that is the most effectively enhanced"

      Actually, I think the biggest problem with any of these techniques is finding an algorithmic definition of 'fittest' and 'effectively'; the rest can be solved by throwing money at the computation.
      • by somersault ( 912633 ) on Monday February 04, 2008 @05:26AM (#22288938) Homepage Journal
        Yep, because it makes lots of financial sense to have a few supercomputers plugged into your TV so that you can get your contrast set up correctly...
        • How did you get 'makes lots of financial sense' from what I said?
          • Just saying that it isn't something that's worth throwing money at (unless possibly it's done on the broadcaster's side rather than the viewer's side). Maybe you're talking about throwing money at finding an algorithm in the first place, but it's going to need buttloads of computation time to run the process even after you've found a suitable algorithm. Maybe once we all have 5000 core computers then this will be a worthwhile use of computation time (because there's nothing else to use it on), but at the mo
    • Re: (Score:2, Interesting)

      by Anonymous Coward
      Yeah, and as such, you should know that this is just a manifestation of the No Free Lunch Theorem. Basically, they're trying to perfect one case at the expense of the others, resulting in possibly hundreds of poor matches to get one really good match. While this isn't the typical route that I'd try to take, it does have its own interesting applications.
    • Re: (Score:3, Informative)

      by NickBoje ( 1232704 )
      You are a hundred and one percent right. PSO works mainly with the help of two arbitrary coefficients which are highly oscillatory. The main effort is in selecting those coefficient values accurately. A very good technique, but very few good applications solved...
    • by Mr2cents ( 323101 ) on Monday February 04, 2008 @05:04AM (#22288866)
      I wonder why they call it a "swarm approach". I'm always suspicious toward people using the latest buzzword, especially if what they are doing sounds like "sorting". The interesting part is the criterion they use, not how they sort the images.
      • PSO is a hill climbing algorithm that involves a population of climbers attempting to find the best outcome of an evaluation function. PSO differs from some other types of hill climbing algorithms in that after each iteration, the population converges upon the current highest ranked individual. The idea is that by moving through the search space towards the current best value, you may inadvertently stumble upon the optimal solution. In essence, the population is acting like a 'swarm', by constantly moving towards the best known solution.
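The loop described in that comment can be sketched in a few lines. This is a generic global-best PSO, not the enhancement method from TFA; all parameter values (inertia `w`, pull coefficients `c1`, `c2`, swarm size) are illustrative defaults, not anything the article specifies:

```python
import random

def pso(f, dim, lo, hi, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Maximize f over the box [lo, hi]^dim with a global-best PSO."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # each particle's own best position
    pbest_val = [f(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + pull toward personal best + pull toward swarm best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = f(pos[i])
            if v > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v > gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val
```

The two "arbitrary coefficients" another commenter mentions are `c1` and `c2`: they weight how strongly each particle is dragged toward its own best and the swarm's best.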
        • PSO differs from some other types of hill climbing algorithms in that after each iteration, the population converges upon the current highest ranked individual. In essence, the population is acting like a 'swarm', by constantly moving towards the best known solution.

          This is *exactly* the problem with this branch of computational intelligence, stuff that you see at any CI/AI conference. PSO is a minor variation of stochastic hill-climbing -- it's a friggin heuristic. There is no guarantee that it will per

    • Re: (Score:1, Interesting)

      by Anonymous Coward
      True, but only for problems where "efficient direct optimization methods" are known. If you have a high dimensional search space and a multimodal objective function or say a multi-objective optimization problem - what then?
    • by corgi ( 549186 )
      From the synopsis: " Despite its potential it relies on only simple mathematics and does not need powerful computers to run, which means software applications based on PSO would not be limited only to academic researchers and those with access to supercomputers."

      There is an excellent treatise on a mathematical foundations of PSO in a book Fundamentals of Computational Swarm Intelligence by A.P. Engelbrecht.

    • Solomon's problem R112 of vehicle routing with time windows (VRPTW) has best solution found using Ant Colony (ACO) algorithm. You can bet researchers thrown *all* known algorithms (Tabu, annealing, genetic, ...) at it, still Ant wins. In other instances ACO has similar results to other algorithms while, from programmer perspective, is much simpler and more elegant.
    • Yeah, that's why NASA uses genetic algorithms for antenna design [nasa.gov] instead of doing it manually, and estimates that it takes less work.

      Err...
    • Re: (Score:3, Informative)

      by rucs_hack ( 784150 )
      with PSO, ant colony optimization, genetic algorithms, etc. is that they take tons of computational effort, and typically work no better than (or significantly worse than) much more efficient direct optimization methods. Wake me up if they show good results (esp. that didn't take a year of computer time to construct).

      Oh god, not another 'Bayesian methods for everything' guy..

      Genetic algorithms have major advantages over other approaches. When designed well they are easy to code, and they can get tasks done
      • Re: (Score:3, Informative)

        by ceoyoyo ( 59147 )
        Two excellent points about why you wouldn't want to apply a GA to photography, one yours and one mine.

        Mine first: you're right, GAs are easy to program, once you know the selection criteria. How do you have the computer select the best looking photo? Photoshop has for years had a feature where the computer will supply some altered images and let YOU pick the right one, but how do you give the computer a sense of esthetics?

        Yours: GAs are great for finding finished products that you can then use. Both GAs
      • Re: (Score:3, Insightful)

        IAAAIR...

        Oh god, not another 'Bayesian methods for everything' guy..

        I know the type, but...

        I have a GA that can outperform a neural network on a particular task

        Really? Sounds unlikely to me, because a NN is a function which maps inputs to outputs (sigmoid, sum, sigmoid, sum,...) and is often, but not always optimized with gradient descent. A GA on the other hand is an optimization algorithm. You could optimize an NN with a GA if you wished.

        Either way, a mapping function (eg an NN) is not really comparable t
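The parent's point that "you could optimize an NN with a GA if you wished" can be shown with a toy. This is purely illustrative (not from TFA or anyone's research): a GA evolving the three weights of a one-neuron "network", sigmoid(w0*a + w1*b + w2), to fit logical AND; all population sizes and mutation rates are made-up defaults:

```python
import math
import random

random.seed(1)

DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w):
    # squared error of the one-neuron mapping over the AND truth table
    return sum((sigmoid(w[0] * a + w[1] * b + w[2]) - y) ** 2
               for (a, b), y in DATA)

pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(30)]
for _ in range(300):
    pop.sort(key=loss)
    survivors = pop[:10]                                    # elitist selection
    children = []
    for _ in range(20):
        p1, p2 = random.sample(survivors, 2)
        child = [random.choice(pair) for pair in zip(p1, p2)]  # crossover
        child = [g + random.gauss(0, 0.3) for g in child]      # mutation
        children.append(child)
    pop = survivors + children

best = min(pop, key=loss)
```

The NN here is just the mapping (loss function input); the GA is the optimizer, which is exactly the distinction the parent is drawing.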
    • You are right that a specialized optimization usually produces better results than a PSO. But there are many cases where it is very time consuming to develop such an algorithm when it is good enough to just use the plain PSO. Compared to other metaheuristics, PSO does not need lots of fitness evaluations, and it is very robust because it has few parameters. When it is good enough to use a simple off-the-shelf PSO, why develop a specialized optimizer?
    • I have read several sources that say that PSO (and other stochastic algorithms) are the last resort -- what you throw at problems when they don't seem to be working any other way. Specifically, if you have no derivative information available or the derivatives are misleading, when you have interestingly shaped feasible regions or if you have many local minima, I think that PSO wins out on the 'total time to initial acceptable solution' criterion. Of course, if you are solving very similar problems repeate
  • The researchers have developed an iterative process where a swarm of images are created by a computer. These images are 'graded relative to each other, the fittest end up at the front of the swarm until a single individual that is the most effectively enhanced.'

    Um... if the computer knew how to tell a good picture from a bad, couldn't it have just created a good picture in the first place? This all seems rather useless/confusing to me.
    • Re: (Score:3, Interesting)

      by Radish03 ( 248960 )
      It can tell good picture from bad, but it's completely relative. Sure, it can come up with a picture that's better than the original. But that's by no means the best it can do. This process continually attempts to create pictures that are better than the previous picture, apparently repeating this process until an image is found where any adjustments to it result in images that are worse in quality. Then that one is selected as the best version.
      • There has to be a basis for judgment! If it can judge a good picture from a bad picture, then it has to know *specifically* what makes that picture better. Why not use that knowledge to jump to the best picture (that it can define) from the first picture, instead of picking the best picture from thousands of pictures that are randomly created from the original? I'm saying it seems like they're doing things the hard way.
        • Re:Just wondering (Score:5, Insightful)

          by TuringTest ( 533084 ) on Monday February 04, 2008 @06:02AM (#22289046) Journal
          it has to know *specifically* what makes that picture better. Why not use that knowledge to jump to the best picture (that it can define) from the first picture?
          Because the algorithm doesn't have that kind of knowledge. In AI-based search we don't know how to define absolute functions of quality, but we know how to define (several) relative dimensions of improvement. (Disclaimer - I do this for a living).

          Intelligent search is based on iteratively improving one of those dimensions, just a little bit, one at a time. This goes on until we find a solution that is as good as we can get in all dimensions at once; but we simply don't know how to combine all dimensions to create a formula that maximizes all them, because their relative improvements interact with each other in complex, chaotic ways.
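That "improve one dimension a little at a time, until no small step helps" loop is ordinary coordinate-wise hill climbing. A minimal sketch (the score function and step size are hypothetical, standing in for the relative quality dimensions TuringTest describes):

```python
def hill_climb(score, x, step=0.1, iters=1000):
    """Greedily nudge one coordinate at a time until no step improves score."""
    x = list(x)
    for _ in range(iters):
        improved = False
        for d in range(len(x)):
            for delta in (step, -step):
                cand = x[:]
                cand[d] += delta
                if score(cand) > score(x):   # keep any small improvement
                    x = cand
                    improved = True
                    break
        if not improved:                     # local optimum in every dimension
            break
    return x
```

The catch, as the thread notes, is that this converges to a point where no single-dimension nudge helps, which need not be the global best when the dimensions interact.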
  • not a good idea (Score:2, Interesting)

    by ILuvRamen ( 1026668 )
    I've seen what Photoshop CS3's auto levels function does to some photos. It gets it right most of the time: when there needs to be little adjustment, it makes a little one, and for really bad ones, it makes big adjustments. You could say it's judging the quality of the input image. Well, it's right about 75% of the time. It usually gets confused when a picture is supposed to look significantly red, green, or blue and it has no way of knowing that, so it screws it up horribly while trying to tone
  • No good heuristic (Score:2, Interesting)

    by randomc0de ( 928231 )
    This procedure sounds like it has the same problem as plain-old AI search - the lack of an obvious heuristic. The article says they use the number of pixels on an edge, but there's no obvious way of finding this - they've moved the computation up one step. The article is light on details so I'm sceptical. If they have a simple procedure for the fitness function, this is a great application.
  • yeah... (Score:2, Funny)

    by cosmocain ( 1060326 )
    This looks like a clever concept even if I haven't seen any results.

    Hell, this needs no comment, it's funny on its own. Mod TFB +1, accidentally funny.
  • I'm currently googling for pics, but nothing comes up except for similarly-worded pages. Please post URL (via Coral) if you find one.
  • by vikstar ( 615372 ) on Monday February 04, 2008 @04:10AM (#22288702) Journal
    For more detail, including the citation of the paper, see this http://www.primidi.com/2008/02/03.html [primidi.com]
    • Why is the world messing about trying to extract data that's clearly not present? Most "decent" camera models are only 12-bit anyway and use a Bayer filter on a CCD sensor. Sigma released a camera last year without Bayer stupidity; better to wait for the rest of the manufacturers to catch up. When we get full-frame Foveon sensors there will be no need for all this guesswork, will there?
      • by ceoyoyo ( 59147 )
        Foveon isn't magic. You basically get triple the resolution in the sensor. Of course, the sensors are harder to make so for the same price you usually get about a third of the photosites....

        Bayer interpolation works very well. There is no missing information.
        • Obviously there is missing info, but it's already guessed into place by the algorithms within the camera. I have been most disappointed with the output of my Canon camera. When one considers that a 100 ISO/ASA film frame has the equivalent resolution of a 60-megapixel sensor, it is possible to know just how poor digital photos are presently. It is so typical that the fans of everything digital will accept such poor quality compared to "old tech". It started with CD audio being 44.1k instead of the 2 x 64k considere
          • by ceoyoyo ( 59147 )
            You have to use some pretty creative accounting to get a 35 mm film equivalent to be 60 megapixel. Most experts (and my own experience with both) put a 6-10 MP Bayer sensor as approximately equivalent in resolution to a good quality color negative film.

            If you're so dissatisfied then you should probably use film.
  • Tantalizing - but not enough to go on, so it is pretty much useless. I found the abstract here [metapress.com] but it does little to elucidate the article.

  • by kegon ( 766647 ) on Monday February 04, 2008 @05:00AM (#22288844)

    They've reinvented genetic algorithms ?

    Without seeing the details (read TFA but it's a summary and quite a bad one at that), I can't see why this would be better than a Bayesian optimisation with a photometric constraint. "The objective of the algorithm is to maximize the total number of pixels in the edges" sounds very, very simplified.

    There are efficient ways of solving these things. Interesting that they invent an image processing algorithm but publish it in a non image processing journal - I wonder why that is ?

    • by varaani ( 77889 )
      PSO is quite closely related to genetic algorithms, and also Population Monte Carlo type of methods. In a computer simulation it probably doesn't matter all that much whether one "moves" existing particles in the search space or "generates" new ones based on the previous ones.

      The choice of the objective function (for example, some kind of Bayesian posterior probability) is surely more important, and unless there is something very special in the structure of the optimization problem, I think that it's a bit
  • Erm, anyone have a link to anything that's actually worth reading, not a short press release? You know, maybe with some PICTURES of their image processing...
  • Bullshit FTA (Score:4, Interesting)

    by EdIII ( 1114411 ) on Monday February 04, 2008 @05:40AM (#22288980)
    While reading the article I came across:

    However, none comes up to the standards of the kind of image enhancement often seen in fiction, where a blurry distorted image on a screen is rendered pin-sharp at the click of a mouse. PSO, however, takes image enhancement a step closer to this ideal.
    Unless I am REALLY missing something, it is next to impossible to go from a blurry distorted image to pin-sharp. Really close to impossible. It is a matter of data. If you start from blurry, you cannot actually obtain the information required to unblur it. It does not exist. Therefore, any results are fundamentally speculative. Contrast levels are not exactly the same thing, since you are only shifting data already there. Edge enhancement, sharpness, is not actually representative of what the objects actually looked like. There is a big difference between taking a blurry box and enhancing the edges, and taking somebody's face and effectively "refocusing" the image so you can see facial features more clearly. You could say this is a step closer and certainly a novel approach to the problem. To actually get to science fiction levels of performance may not actually be possible, though.

    Such enhancement might be useful in improving snapshots of CCTV quality for identification of individuals or vehicle number plates
    Not really useful at all. At least from an evidence point of view. Since you cannot really be sure if that is the individual in the picture, the best you can approximate is closer to one of those sketches they provide. I'm not being racist, but certain races do look similar. If you took 100 Chinese people for example, and started progressively blurring their pictures, you would start to get pictures that you could not make a distinction between them, much less a definitive identification. So there had better be some corroborating evidence, since it won't take too much of an expert witness to shoot that down. So it would be better to say it could help identify possible suspects, not individuals. Burden of proof, reasonable doubt, and so on.

    Another thought, even more concerning, is that if you took those 100 pictures and showed them to a test group that saw before and after shots for each individual, how effectively could they make identifications? What about a test group shown only the after shots? My point is that if you are predisposed towards identifying a certain individual, you are more likely to do so. In fact, people remember faces in a similar way, by exaggerating facial features. I believe it is referred to as face perception. So it might be possible for the human brain to identify, incorrectly, an individual from one of those blurred images. All in all, not solid enough for legal purposes, which CCTV identifications of individuals and license plates are certainly used for.

    I could be wrong, but until I see actual pictures, I will have to play the part of the skeptic.

    Great idea, and certainly thinking outside of the box, so they deserve respect for their work.
    • Soon, every surveillance camera video will be enhanced, and we'll see the face of Elvis on every criminal where there was once a blur...
    • Comment removed based on user account deletion
      • Have you seen the results of what you linked to? Shitty, every one. It goes from blurry, to grainy -- w00! No information (re)gained. There is no free lunch.
      • by jamesh ( 87723 )

        Untrue. The information is spread into the pixels over which it is blurred. With the appropriate convolution matrix, you can recover the pinsharp picture

        All that article says is that you can make the image clearer. It even says that the zooming in that you see on the crime tv shows is not possible.

        If you had a high enough resolution you might be able to apply a convolution matrix to the problem to 're-focus' it, but once you have the image in a digital form with a finite resolution, you can't do that much w

    • Unless I am REALLY missing something, it is next to impossible to go from a blurry distorted image to pin-sharp. Really close to impossible.

      Actually, mathematically, it is completely impossible for most images. This is the same reason that any data compression algorithm must, at least some of the time, produce "compressed" files that are larger than the originals: if they didn't, there would be a many-to-one mapping, violating the pigeonhole principle [wikipedia.org].
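The pigeonhole argument in that comment is just counting, and can be checked directly. A sketch (the bit-length `n` is an arbitrary choice):

```python
# A lossless compressor that shortened EVERY n-bit input would need an
# injective map from all n-bit strings into the set of strictly shorter
# strings -- but that set is always one element too small.
n = 16
inputs = 2 ** n                           # distinct n-bit inputs
shorter = sum(2 ** k for k in range(n))   # all strings of length 0..n-1
assert shorter == inputs - 1              # pigeonhole: one codeword short
```

The same counting is why deblurring every image is impossible: many distinct sharp images blur to the same blurry one, so no algorithm can invert the map for all of them.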

      • Re: (Score:2, Informative)

        by EB FE ( 1208132 )
        The claim of TFA is not that their algorithm can take one blurry image and generate a less blurry image. The algorithm uses a series of pictures of the same subject (I assume something similar to bracketing exposures) and uses the data from most of those images to sharpen edges in the image that already contains the most clearly defined edges. Imagining how this works is pretty simple. Suppose the best image has an edge that appears to be on pixel columns x and x+1 and those pixels have luminance values a
    • If you start from blurry, you cannot actually obtain the information required to unblur it. It does not exist.

      You can potentially sharpen parts of an image (to a degree) if the information exists elsewhere in the image. For instance if there are repeated elements in the image (images of text, man made structures, etc...). Human faces are also mostly symmetric.

      With CCTV you also have a series of other very similar images to get information from in order to sharpen a single frame.

    • Unless I am REALLY missing something, it is next to impossible to go from a blurry distorted image to pin-sharp.

      Actually it is possible. It has been done to uncover blurred out credit card numbers, for instance. Also, in addition to the methods used in TFA, one can use fractal compression. This matches the 'shapes' in the image to individual fractals, and allows zooming in much further than originally possible without producing pixellation. This is used routinely in the publishing business with low-resol

    • by ceoyoyo ( 59147 )
      That depends on how the blurring was caused. It IS possible (in theory) to sharpen an image to the point where its MTF drops to zero. With most imaging systems the MTF drops off slowly, so there's quite a bit of sharpening that's possible. Deconvolution algorithms can work quite well in this area, particularly if you know the MTF. As the MTF drops nearer to zero you get extra noise, of course.

      What's not possible is sharpening up to frequencies above where the MTF is zero. Since your imaging system mul
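The deconvolution idea here can be shown in a toy 1-D example with a pure-Python DFT. The blur kernel below is an assumption, chosen so its spectrum (the MTF, 0.6 + 0.4*cos(2*pi*k/8)) never reaches zero, which is exactly the condition ceoyoyo describes for recovery being possible:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [(sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                 for k in range(N)) / N).real for n in range(N)]

signal = [0, 0, 1, 0, 0, 0, 4, 0]
# circular 3-tap blur: 0.6 at the centre, 0.2 on each neighbour
kernel = [0.6, 0.2, 0, 0, 0, 0, 0, 0.2]
S, K = dft(signal), dft(kernel)
blurred = idft([s * k for s, k in zip(S, K)])        # convolution = product of spectra
recovered = idft([b / k for b, k in zip(dft(blurred), K)])  # divide by the MTF
```

Swap the kernel for one whose spectrum hits zero (e.g. [0.5, 0.25, ..., 0.25]) and the division blows up at that frequency: the information there is simply gone, which is the "above where the MTF is zero" limit.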
      • CSI style surveillance camera enhancement is impossible, but you can get a surprising amount of additional detail out of a blurry photo with properly applied deconvolution.

        I agree. Check out the deconvolution examples using the Gimp Plug-in Refocus-it [sourceforge.net] which is based on finding the minimum of the error function using Hopfield neural network, or Refocus [sourceforge.net] which is based on a modified form of the Wiener filter, called the FIR (Finite Input Response) Wiener filter. Refocus is conveniently available as a Digikam plugin [digikam.org] as well as a gimp plugin.
        I've played with Refocus and have had some pretty good results with it, even better than unsharp mask, as the documentation states:

        In pr

        • by ceoyoyo ( 59147 )
          Unsharp masking is a spatial domain algorithm that basically increases local contrast on edges. Its effect might be sort of a high pass filter, but it's not in the nice simple way that a real high pass filter is.

          Deconvolution, on the other hand, is a direct high pass filter. With non-blind deconvolution techniques the filter is designed to counter the low pass filter that caused the blurring. With blind techniques you usually pick some likely blurring function (like a Gaussian) and then apply it iterati
          • Perhaps I should elaborate.

            The (nonlinear) threshold setting on a digital unsharp mask algorithm causes my high pass filter analog to break down, but otherwise it's valid. So ignore the threshold, for a moment, in the unsharp mask. The implementation of the unsharp mask is in the spatial domain as you said, but (without the threshold) it has a dual in the frequency domain. The unsharp mask uses a convolution of the image with a Gaussian for blurring, followed by linear additions and subtractions. Convo
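The construction being discussed (original plus a scaled difference from a blurred copy) is easy to sketch in 1-D. The 3-tap kernel here is an assumption standing in for the Gaussian; the over/undershoot it produces at edges is the "local contrast" boost ceoyoyo mentioned:

```python
def blur3(x):
    # simple [0.25, 0.5, 0.25] smoothing kernel, edges clamped
    n = len(x)
    return [0.25 * x[max(i - 1, 0)] + 0.5 * x[i] + 0.25 * x[min(i + 1, n - 1)]
            for i in range(n)]

def unsharp(x, amount=1.0):
    # sharpened = original + amount * (original - blurred); no threshold
    b = blur3(x)
    return [xi + amount * (xi - bi) for xi, bi in zip(x, b)]
```

On a step edge like [0, 0, 0, 10, 10, 10] the output undershoots below 0 just before the edge and overshoots above 10 just after it, while flat regions pass through unchanged, i.e. low frequencies are untouched and high frequencies are amplified.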

            • by ceoyoyo ( 59147 )
              EVERY operation in the spatial domain has a dual in the frequency domain. Usually when you say high pass filter, you mean a particular class of operations in the frequency domain though. Unsharp mask behaves like a high pass filter, and may actually be one in the restricted sense you outlined.

              On the other hand, FIR filters, I believe, have direct equivalent frequency domain filters, even if they are actually calculated in the spatial domain, no restrictions necessary. They're pure convolutions, without t
    • by kegon ( 766647 )

      You clearly don't know anything about image processing, but hey, this is Slashdot.

      I could be wrong, but until I see actual pictures

      Seeing pictures would not prove anything. A ground truth comparison is what is required.

      Great idea, and certainly thinking outside of the box, so they deserve respect for their work.

      No, respect for trying but it doesn't look like more than a small improvement, if that. We have to get hold of this paper and see if the results are presented in an appropriate scientific contex

      • by EdIII ( 1114411 )

        You clearly don't know anything about image processing, but hey, this is Slashdot.

        You're right, I don't know much about image processing other than a limited experience with Photoshop filters. However, this being Slashdot, I don't need to know what I am talking about, right? Insult received.

        I am not claiming I understood image processing either. I am approaching it from a logical, mathematical approach. Especially since they claim that this could be used for surveillance purposes to a legal end. Image

        • by kegon ( 766647 )

          You're right, I don't know much about image processing other than a limited experience with Photoshop filters. However, this being Slashdot, I don't need to know what I am talking about, right? Insult received.

          I think it helps to keep things on topic if you don't make a long post speculating about things you clearly don't know. If you know you don't know, then how is it an insult ?

          I am approaching it from a logical, mathematical approach

          I don't see any logical or mathematical argument in your post. Now you

    • Unless I am REALLY missing something, it is next to impossible to go from a blurry distorted image to pin-sharp. Really close to impossible. It is a matter of data. If you start from blurry, you cannot actually obtain the information required to unblur it. It does not exist.

      But if you take another image of the same scene, you just captured some more information. An algorithm can attempt to combine this added information from two or more frames into a single image of the scene which has more information in it than a single frame.

      I think the part you are missing is that this is about enhancing a scene using multiple images of the same scene.
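The multi-frame point can be illustrated with the simplest possible case: stacking noisy frames of a static scene averages the noise away. All values here are made up (a "true" pixel value of 100 with Gaussian sensor noise); nothing is from TFA:

```python
import random

random.seed(42)

TRUE = 100.0     # hypothetical true scene value for one pixel
SIGMA = 10.0     # hypothetical per-frame sensor noise (std. dev.)

def frame():
    # one noisy observation of the pixel
    return TRUE + random.gauss(0, SIGMA)

frames = [frame() for _ in range(100)]
stacked = sum(frames) / len(frames)          # combining frames averages out noise
stacked_error = abs(stacked - TRUE)
```

With N independent frames the noise shrinks like 1/sqrt(N), so 100 frames cut it roughly tenfold. Real super-resolution does much more (it exploits sub-pixel motion, as described upthread), but this is the basic sense in which extra frames carry extra information.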

  • This looks like a clever article alright, even if I haven't bothered reading it.
  • Totally makes me think of Craig Reynolds's "boids" -- take a look:

    http://en.wikipedia.org/wiki/Boids [wikipedia.org]

    What's really cool is that boids force you to re-think how you define intelligence, well, at least collective intelligence. It's like watching ants at work. Love it.
  • Swarm intelligence is what I research. PSO is not really an algorithm, it is a metaheuristic. Of course when I talk with non-engineers I might also use the terms algorithm or recipe, but I would expect correct terminology on a site whose readership contains a large percentage of CS/EE degree holders.
  • Well, I do get a very big laugh from CSI and their "enhanced photos": 1) start with a very good shot, 2) degrade it and say it is the original, 3) show the true original image as the "enhanced" one. I still like greycstoration, pde_TschumperleDeriche2D, and pde_heatflow2D.
  • Just a note.. If you want to read some fun swarm-centric sci-fi, pick up Crichton's "Prey", where he writes of simple one pixel cameras injected into the bloodstream, then swarm together to form an eye which acts as a miniature video camera. Among other things, he also writes of how humans are swarms themselves, consisting of tiny little dumb cells that work together to form a supposedly intelligent life-form.
  • Pics or it didn't happen.
