Graphics Software

Image Processing By Example (127 comments)

Aaron Hertzmann writes: "My collaborators and I will present a paper called Image Analogies at SIGGRAPH 2001 this summer, where we describe a machine learning method for 'learning' image filters by example. For example, given a Van Gogh painting, the algorithm can process other images to look somewhat as if they were painted by Van Gogh."

"It can also 'texturize' images based on a sample textured image, e.g. to create landscape photos. It can do many other types of filters, as long as you give appropriate 'before' and 'after' examples to learn from." I especially like the idea of inferring a high-resolution image from a low-res one. The software is available for Windows and Unix, and "the source code is freely distributed for educational, research and non-profit purposes."

This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    I just noticed that resizing link. That's awesome - I can't count how many movies I've seen where they did the same thing and I was thinking "yeah, right". I can't wait till someone makes a filter that loads all the websites from coolhomepages.com and spits out a composition onto my empty canvas in Photoshop ...
  • by Anonymous Coward
    ... a vehicle for people to toot their own horns and advertise their OWN papers, etc?
  • I just wonder what it would look like if you passed Monet's Water Lilies through the Van Gogh filter. An approximation of an impression of an impressionist's painting. Figure out the logistics of that one.

  • I'd like to see what would happen if you took a printed circuit board and made a printed-circuit-board filter and passed Monet's Water Lilies through _that_. Machines of loving grace, indeed ;)

    Why are so many people talking about Van Gogh rather than talking about taking pictures of industrial wastelands and applying the industrial wasteland filter to woodland scenes, or their own face, or computer renderings? This is potentially a _fascinating_ tool, and I want one. But the last thing I'd use it for is imitating paint brushstrokes. I can make brushstrokes with real paint, or fake brushstrokes with Painter or something. I can't make a digital image of a guitar made out of water ripples with anything but this- at least, not so effectively.

    This _is_ a great thing, but it's not an artist replacement. It's a tool. It's convergence. Your ability to use it is very much dependent on what you can imagine, such as taking a portrait and doing it all in the textures of woodgrain while keeping the colors the same, or giving everything the textures of concrete, or fur, or cloth. The fact that it's not strictly texturemapping is what makes it potentially huge in significance...

  • Because "geek humor" in general isn't funny.

    -- Brian
  • Eh, you're probably right.

    I'm also DEFINITELY able to spell, which really makes me an outcast around here.

    -- Brian

  • Could this one day be used to turn my dull crud into something Fitzgerald or Hemingway or even Asimov or Heinlein might have written?

    Then you could take news from say CNN and process them to be Slashdot-ready. Imagine a Jon Katz or Taco filter.
  • Raaaahh!!!! Sometimes you read things and it just makes you angry. I read my comments filtered +3, but this really takes the cake. Troll anyone?

    The point is NOT to make a picture that captures Van Gogh's brilliance. It is to create a _computer_program_ that identifies the _essential_ elements of a visual transformation (i.e., it learns a Photoshop filter).

    And from a SINGLE picture mind you.

    This technology is _really_ something. Think about how you'd write "a Photoshop filter that makes a picture look like a Van Gogh". (I used to work at Corel, in Photo-Paint, in the BitmapFX group, so I know something about this :) We'd literally work for weeks to come up with something like that. Just THINK about that. You've taught a computer to "write programs" that a computer programmer would take weeks to write.

    #include <malloc.h>
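    The learning step the parent describes can be sketched surprisingly compactly. Below is a toy, single-scale version of the idea, assuming grayscale images as NumPy arrays; the published algorithm adds Gaussian pyramids, feature vectors, and a coherence term, and `learn_and_apply` is a name invented here. The "learned filter" is nothing more than a nearest-neighbor lookup over patches of the before/after pair:

```python
import numpy as np

def learn_and_apply(A, Ap, B, patch=3):
    """Toy image analogy: A -> Ap is the training pair ('before' and
    'after'); B is the new input. For each pixel of B, find the pixel
    of A whose surrounding patch matches best and copy the
    corresponding pixel of Ap."""
    r = patch // 2
    Apad = np.pad(A, r, mode='edge')
    Bpad = np.pad(B, r, mode='edge')
    h, w = A.shape
    # Flatten every patch of A into one row of a lookup table.
    A_patches = np.stack([Apad[i:i + patch, j:j + patch].ravel()
                          for i in range(h) for j in range(w)])
    out = np.empty_like(B, dtype=Ap.dtype)
    for i in range(B.shape[0]):
        for j in range(B.shape[1]):
            q = Bpad[i:i + patch, j:j + patch].ravel()
            best = np.argmin(((A_patches - q) ** 2).sum(axis=1))
            out[i, j] = Ap.flat[best]
    return out
```

    Trained on a painting and a blurred copy of it, the same lookup will happily "un-blur" a new photo into the painting's texture, which is where the Van Gogh examples come from.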
  • Perhaps this software could be trained to learn from images of masonry, cliffs or pebbles. It could then apply the knowledge to create 'petrified' images of Star Wars actresses.
  • You'd still need a pretty good filter to get from the blurred Van Gogh to a Van Gogh imitation, a simple sharpen or brush-stroke filter won't do. The program can be downloaded from the page so go ahead and try another blurred image as source and see what it comes up with.
    Oh, and check out the other examples as well. There's also one where you hand-draw a very simple picture indicating where the river, city, etc. are located on a photo; then you draw another simple sketch of the same kind and the filter creates a whole new image of a similar scene.
  • I wouldn't even call it "painting". The PC does nothing but manipulate the existing image. Doh.
  • At first, I was going to say, "Well, I guess this is further proof that using computers doesn't count as fine art." (Only joking, I'm a BFA student and use computers plenty.)

    Then I looked at the images this program produces. I guess I can't expect anything BETTER than what is stuck into it - Van Gogh would never have painted scenes like those. Hell, any beginning art student could "paint like Van Gogh" - just mix your paints a bit thick! Perhaps the program needs a content selection algorithm. (I'd like to see it do a self portrait.)

    At any rate, the "holy grail" of this technology is to emulate airbrushed velvet. Not a fan of Elvis but love the look of painted velvet? Just use a pic of your favorite celeb! Finally, Natalie Portman recognised as she should be!
  • There's this cool plugin for GIMP which works on a similar concept as the patterns by numbers thing to remove objects from images by filling in the empty space left by removing them with a pattern copied from around the object. Sort of like using cloning to remove things, but automatic.
    Have a look at http://www.csse.monash.edu.au/~pfh/resynthesizer/ - oh, and it's probably already in your installed plugins directory, called Resynthesizer.
  • by geophile ( 16995 ) <<moc.elihpoeg> <ta> <oaj>> on Tuesday June 26, 2001 @09:33AM (#127273) Homepage
    So, you're going to find all the pr0n and remove the ears? Whatever floats your boat.
  • the real advantage both masters have is not in their actual prose but in the ideas they express - and no filter is going to be able to duplicate that.

    Au contraire - see sci-fi book a minute at rinkworks.com. :)
  • This was an (admittedly sad attempt at) humor, my friends, humor...
  • I think the point is that you train it in how to make up resolution in a certain situation with a certain type of picture. Could be useful in some situations, not that I can really think of any, but if I could well, then I'd have tons of karma and stuff.
  • The problem of finding "tanks" or whatever is actually an old one that goes back to the late sixties and seventies, when the CIA was planning on sending up real reconnaissance satellites with multispectral capabilities and NASA was sending up their interplanetary probes. The math used for this problem is older than that, but some new methods are being implemented by both government agencies and private corporations.

    It used to be much easier when we were just taking visible-light photos. For instance, the wife of a friend of mine used to specialize in examining runway lengths. All she would do all day is look at photos of runways and compare their lengths to previous images to determine if they were being upgraded for bigger/more powerful planes. Now imagine being given the same image of a runway, but as twenty to one hundred and twenty separate images at different spectral wavelengths, and you can see why there is a need for automation. Unfortunately, all of the methods to analyze this type of data require fairly significant amounts of computational time and analyst time to extract the information you are looking for.

    There are many ways of going about looking for these "tanks" or whatever you want to find in images (SCUD missiles, concrete disguised to look like granite, gold-bearing strata, oil-bearing strata, alpha ganglion cells in the retina, etc.). Using traditional multispectral classification techniques, one does not necessarily have to implement neural networks. (There are times they can be problematic, but this is probably due in many ways to immature code.) You can simply use supervised classification techniques, where you specify the spectral fingerprint (you have to know what you are looking for) and have the algorithm extract or highlight the pixels it finds.

    Alternatively, if you do not know the exact spectral fingerprint of what you are looking for, you can perform an unsupervised classification such as a k-means or ISODATA classification. These techniques can be fairly time-intensive from both a computational and an analyst's perspective, thus the push for more automated methods. It is the automation that has proven difficult. Some things, like co-registration of hyperdimensional images, are easily implemented using embedded fiducial points, but the actual analysis and taking apart of hyperspectral data is often more of an art than a science (I hate to say that, as that's what people say when they don't understand all aspects of a problem, but...), and as such it currently requires lots of "biological" as opposed to silicon supervision.
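    The unsupervised route mentioned above fits in a few lines. This is plain k-means over pixels given as rows of spectral band values (ISODATA differs mainly by splitting and merging clusters between iterations); `kmeans_classify` is a name invented for this sketch:

```python
import numpy as np

def kmeans_classify(pixels, k, iters=20, seed=0):
    """Unsupervised classification of multispectral pixels.
    pixels: (n_pixels, n_bands) array. Returns a class label per
    pixel plus the k mean spectra ('spectral fingerprints') found."""
    rng = np.random.default_rng(seed)
    # Initialize centers from k distinct pixels.
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest spectral center.
        d = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Recompute each center as the mean spectrum of its class.
        for c in range(k):
            if (labels == c).any():
                centers[c] = pixels[labels == c].mean(0)
    return labels, centers
```

    Supervised classification is the degenerate case: skip the loop and assign each pixel to the closest of the fingerprints you already specified.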

  • It's near the right-hand side of the keyboard.
  • It looks like the training data needs to be similar to the target data if the results are to be visually similar. In the watercolor example the training data of the apples produces the best results on the photo of the tulips. The filtered landscape photos don't look anywhere near as good. I wonder if you could use several sets of training data (still life, landscape, portrait, etc.) to create a more general purpose filter.

    I would think that approximating the unfiltered source part of the training data from a painting (like the Van Gogh example) would produce kind of twisted results from photographic data. I wonder if they've tried getting a "real" painter to mimic Van Gogh's style from a photograph and using both the photo and the painting for training data. I expect the results would be better. (Although that wasn't really the case with the pastel examples so maybe not.)

    Maybe I'll get a chance to ask at the show.
  • by Darth Maul ( 19860 ) on Tuesday June 26, 2001 @09:55AM (#127280) Homepage
    I'm seeing a lot of posts just saying "Big deal, it's a Photoshop filter". But that's not the point. The point of this is that given the source and the final image, the algorithm learned to reproduce the filter. In other words, it dynamically generated the "Photoshop filter" based on the input and output of the sample set. That's pretty cool!

    It's not just using the filter; it is creating the filter.

  • Take a look at some of the other images. And they didn't just "go into xv and sharpen the image" they got the computer to figure out how to do it on its own, given only two inputs!

    I doubt you could program something similar if your life depended on it, loser.
  • That is what they did, although in the case of the paintings, they had the 'after' shot to start with and simply blurred it to get the 'before' shot. In other words, there was no 'original' filter.
  • by delmoi ( 26744 ) on Tuesday June 26, 2001 @04:01PM (#127283) Homepage
    In the case of Van Gogh, you just need to fuzz up the image a little. But there's a lot more to Van Gogh than fuzziness!

    Wow, THEY DID. Did you even read the site, or just look at the example? Here's one of some guy named Lucian Freud [nyu.edu]

    And keep in mind they didn't just tell the computer to 'fuzz up' the image; they gave the computer a copy of Starry Night and a blurry copy of Starry Night and said 'figure out how to go from the blurry one to the original'. After that, the computer did all the work.
  • by harmonica ( 29841 ) on Tuesday June 26, 2001 @12:06PM (#127284)
    You can use PPM compression for this.

    Say you have samples of the works of N authors and a text T that has to be identified. Compress the text N times, each time the system is initialized with the samples from another author. T will usually compress best when the system was initialized with the samples from its own author.

    See Bill Teahan's PhD thesis [rgu.ac.uk].
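    The scheme can be demonstrated without a real PPM coder; the sketch below uses zlib as a crude stand-in for "initializing the system with an author's samples" (Teahan primes an actual PPM model, which discriminates far better than this proxy), and `attribute` is a name invented here:

```python
import zlib

def attribute(text, samples):
    """Guess the author of `text` given a dict {author: sample_text}.
    Proxy for PPM priming: count the extra compressed bytes needed to
    append `text` to each author's sample; the sample that 'explains'
    the text best adds the fewest bytes."""
    def extra_bytes(sample):
        base = len(zlib.compress(sample.encode()))
        both = len(zlib.compress((sample + text).encode()))
        return both - base
    return min(samples, key=lambda author: extra_bytes(samples[author]))
```

    The intuition is the same as in the thesis: a model (or dictionary) built from the right author's text predicts the unknown text cheaply, so it compresses smallest.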
  • Well, while filters have existed in Photoshop or the GIMP for ages, they tend to do a not altogether excellent job. Did you look at the one where they took an image and applied a watercolour filter? The result looked nothing like a watercolour, but had encoded in it what some folks might think a watercolour produced from an image might be. They then fed other photographs into the engine and produced what look to my untrained eye like real honest-to-goodness watercolours. I think that's pretty neat: they created good stuff by mimicking bad filters.
  • The point is, this is an excellent first step on the way to making software which can paint. It may not be here today, but we are progressing towards that end. What I find amusing is that this software has no concept of aesthetics or beauty; it is by no means a thinking machine. And yet it creates what we find to be aesthetically attractive pieces. Indicates to me that perhaps artists are not more deeply connected to the soul or heart than the rest of us after all.
  • Van Gogh was not an expressionist. He was a post-impressionist.

    I only took the trouble to correct your mistake because I disagree with your fairly backwards view of art...this idea of the artwork as a vessel for "the true meaning of the artist" is really trite.

    Another pedantic slashdot reply. I am an ass.
  • Norman, I truly did not write that in the spirit of a troll. The word/pixel thing was something I pulled out of my ass to try and make my point, I do realize how stupid that actually is, I just thought it would help some people to visualize what I meant.

  • by AntiFreeze ( 31247 ) <antifreeze42@@@gmail...com> on Tuesday June 26, 2001 @10:11AM (#127289) Homepage Journal
    This is actually quite impressive software.

    What I've started to wonder is where else its underlying principles could be used, or where this sort of technology could lead in the future.

    Could it be used to analyze text from certain authors (hey, text and art are no different to a computer - treat words as "pixels" and sentences and structures flow like colors) and mimic their style? Could this one day be used to turn my dull crud into something Fitzgerald or Hemingway or even Asimov or Heinlein might have written?

    I also have the following few questions:

    • What happens when one feeds a Van Gogh through the Van Gogh filter? Does the resultant image change much?
    • Does the program apply the "filter" differently depending on what type of input it encounters, or is the same method applied to all input?
    • Conversely, can the program be used to recognize when a work is of a certain artist?
    • Or can it be used to see if an image has already been passed through a certain filter?
    • Are there cases which cause the method to fail or create an undecipherable image? And if so, are these cases unique or do they coincide with a certain type of artistic style? [e.g. Monet -> Van Gogh just won't work, right?]

    I think that sums up my feelings. This stuff is really impressive guys, I hope the conference goes well.


  • As soon as you get a computer to do the work and create the thoughts that Heinlein put out, I think it's time for the human race to quietly disappear into the background to be reused for carbon.
  • http://www.photools.com/ [photools.com]

    windows only though

  • If this was going on in an unclassified lab almost 10 years ago, I imagine that the best gov't computers today can easily Spot the Loony, or tank, or asian woman or nearly anything else desired.

    See the following interview with Igor Aleksander. Search for "WISARD"


    "Rubbish", said the Professor, "we were doing that way back in 1982! Not only could WISARD recognise faces, it could tell you what sort of expression they had, too; whether, for example, they were smiling or frowning. We trained it simply by showing it examples of different faces registering different emotions, which it then stored to its memory. One of the earliest applications was to identify soccer hooligans."

    "What? It could look at someone and say that his eyes were too close together and his forehead too low, so he must be a villain?"

    "No, it compared input faces against a database of faces of known hooligans. WISARD also had commercial applications. We built a machine for recognising bank notes, and another that could sit on a production line and scan passing cakes to ensure all the cherries and the chocolate whorls had been put on. Basically, if you had the right training set, you could teach the system to recognise whatever you wanted it to."
  • Good grief. :-( I thought there for a second that the moderators were showing slightly better than usual troll detection, then I came back and this has been moderated up to +5!

    Clue: The word/pixel gibberish. Geez.

    Could it be used to analyze text from certain authors (hey, text and art are no different to a computer - treat words as "pixels" and sentences and structures flow like colors) and mimic their style? Could this one day be used to turn my dull crud into something Fitzgerald or Hemingway or even Asimov or Heinlein might have written?

    No. A local filter is a local filter. It is not a painter, nor an author.

    If you put your dull crud through a filter it'll turn it into a *SHOCK* filtered version of the same dull crud. Try the jive filter if that sort of thing impresses you.

    What happens when one feeds a Van Gogh through the Van Gogh filter? Does the resultant image change much?

    Yah, same as passing a finger painting thru it - you'll get a version with the same local distortions added. It's a filter, get it?

    Does the program apply the "filter" differently depending on what type of input it encounters, or is the same method applied to all input?

    Does it say it's data adaptive? No.

    Conversely, can the program be used to recognize when a work is of a certain artist?

    No, neither can it make toasted sandwiches or draw Mandelbrot sets. It's a filter.

    Or can it be used to see if an image has already been passed through a certain filter?

    No, it's a filter.

    Are there cases which cause the method to fail or create an undecipherable image?

    No, it's a filter.

    And if so, are these cases unique or do they coincide with a certain type of artistic style? [e.g. Monet -> Van Gogh just won't work, right?]

    If you put a Monet painting thru the filter it'll look no more like a Van Gogh than if you put anything else thru - it'll look like a filtered Monet.
  • So what? It's just optimizing the filter parameters to reduce the difference between the source and target. The only moderately interesting work they did was the manual step of choosing the parameterized filters. The program itself is just a best-fit algorithm, and the fact that some impressionist styles can be approximated by local distortions that approximate brushstroke styles doesn't turn a best-fit algorithm into Van Gogh.
  • Since when did Slashdot become... ... a vehicle for people to toot their own horns and advertise their OWN papers, etc?

    Yeah! A story is so much more interesting when it languishes unknown in the vastness of the web for six months, only to be discovered by the time it has officially become Old News!

    Bring back the good old days of virus reports from even mistier pasts.
  • by Bemmu ( 42122 )
    Simply amazing.
  • Ever see "the Sentinel"...
    Computer nerd, my ass. What about the ubermensch whose senses were sharpened in the Brazilian rainforest to the point where he can clearly see the limited edition Willie Mays watch visible for a split second on the two-pixel wrist of a bank robber caught on grainy security camera?
  • I seem to remember a PBS special (Nova?) that had a segment on experiments done with image recognition in pigeons. These pigeons were placed in a large, darkened box that displayed a projected image on one wall. They were then prompted to respond to the image in different ways and were given food feedback if they were "correct" (pecked when the right image was displayed). The pigeons were first introduced to one image and then shown related images as well as unrelated images to see if they could generalize... it emerged that pigeons can distinguish Monet paintings from Cezanne, though they have some trouble telling Cezanne from Picasso.
  • Well, not really - the brush strokes are too predictable and don't always work. Take a look at some of Van Gogh's art and you will see that he did more than side-to-side brush work.

    They do look very interesting though...
  • Except that the book-a-minutes are neither automated filters, nor do they (IMHO) in fact capture the essence of the book. From what I can see they are generally a poorly defended criticism of the book, or a synopsis of one or two points in the plot.
  • And my point is that the book-a-minutes leave the claims unsubstantiated.
  • Actually, text and art _are_ substantially different to a computer. The former has rules, the latter does not.

    If a single pixel is off by a bit, you won't notice - your brain subconsciously blends the whole thing together anyway. If, on the other hand, the wrong word is chosen, it will stick out like a sore thumb - even if it's only a preposition.

    Finally, I would posit that even if a filter could make your prose sound like Asimov or Heinlein, the real advantage both masters have is not in their actual prose but in the ideas they express - and no filter is going to be able to duplicate that.
  • OK so, maybe you're hopped up on smack and can't function normally, but if you'd read the linked article you'd see that this is very much related.

    There's a new wave of parameterless image filters rising up into a whole new field of study and you're sitting here bitching about Van Gogh's intent.

    Van Gogh and his whore's stupid gift of a severed ear are not relevant ... guy.
  • by Jafa ( 75430 ) <jafa@NosPam.markantes.com> on Tuesday June 26, 2001 @09:52AM (#127304) Homepage
    There are a few companies (iterated.com, lizardtech.com, some others) that have been doing fractal and wavelet scaling for a while now. Pretty impressive stuff. I don't know all the theoretical details, just the practical uses. Scaling up to about 1600% is possible with no noticeable artifacts (to the human eye). We've been using some of this stuff in the prepress/graphic arts industry for a while.

  • I especially like the idea of inferring a high-resolution image from a low-res one.

    Wow! Does that mean that a certain staple of movies with spy-cameras crucial to the plot will finally have a basis in reality?

    Every time I hear, "I can't see his face because the video's not clear enough - here, let me enhance the image," I just wanna scream...


  • Did van Gogh really paint tulips with ripples? It seems that the ripple effect used for the real ripples has a strong impact on the derived filter and dominates the result. In addition, the ripples do not follow the natural lines in the image.

    I think manual filters are still much better, oops, I wanted to say: nothing can replace a real van Gogh.
  • So when's it going to show up in the Gimp? This is some cool stuff. People could make custom filters without programming anything, just train the filter to do what you want. Let the computer do the thinking in terms of imitating the effect you want. Very very impressive.

    "I may not have morals, but I have standards."
  • Van Gogh is boring. What about software that 'looks' at all my pr0n and categorizes it?
    Or maybe a program that looks at pictures of people, and generates pictures of those people having sex.. (like Xena and Scully; Taco will like [slashdot.org] that one)


  • by BerkeleyBull ( 101498 ) on Tuesday June 26, 2001 @02:05PM (#127309)
    Sometimes, when I look at a Van Gogh, it breaks my heart, it is so beautiful. When I look at these pictures, I get the same nausea-induced feeling that any cheap knock-off imitation gives me. It's not even interesting technology.
  • I'm guessing you didn't read the article... Data mining may very well have the same effect as a process a human could think of (like the one you explained), but in most cases it is more general, and that is what makes it useful. It can make an image Van Gogh-like - maybe you could too - but the same program, given a bunch of paintings by, say, me, could make it look kinda like I did it... Could you think of a procedure that, given some of my art, could make an image look like mine? Probably not, since you don't know what my art looks like (stick figures, mainly). But this could.....


  • Can it make a non-sexy image look sexy again?!

    Sorry dude, you're destined to remain on fugly.net

  • This is really cool, but I don't think it helps those who argue that computer graphics is an art. When you can't tell the difference between human art and computer-generated art, is it really an art? Or does it pass some Turing-esque test?
  • Hmm. I wonder how good a job it does with p0rn images. Can it make a non-sexy image look sexy again?!
  • Maybe you could apply it to a well known computer system and give it the look and feel of another system. If this works call your filter a gate. You could bill for your gates...
  • Slashdot is certainly an unusual forum to discuss scientific publications.
  • Well, they do have an algorithm which is based on some math and thay have a preprint there. You should look at their web page.

    Yes, I call it scientific. As to the quality of the science - I don't know enough to have an informed opinion.

  • It seems possible to apply this technology as a more general filter. If you take data encoded by a process, along with the result, and make a great number of training sets, then it should be possible to take some encoded output and go "backwards" through the process to recover the input. The problem is that there isn't nearly as direct a link between the decrypted and encrypted data as there is between the old visual object and the new visual object.

    I guess I'm just curious of how much this is limited to the "visual" world and whether it could be applied to create abstract digital filters.

  • Admittedly, yes, Van Gogh was a post-impressionist, but expressionism derives directly from him and Gauguin to a large degree (and perhaps even Japanese watercolor).

    As for my interpretation of "the true meaning of the artist" - whether or not I agree with the concept of all art needing some intangible, metaphysical, or even spiritual explanation that is derived from the ego of the artist (which I don't necessarily) - Van Gogh has become a cause célèbre for the artist whose life was as important as their work. It's part of the reason for the ridiculous prices associated with his work. His letters to his brother established him as a figure after his death, and a cursory read would reveal that there were specific personal and deterministic notions in his endeavours. It may be trite to yourself, but it drives the ridiculous art market today, and many would say that it began with Van Gogh. Van Gogh's work is part and parcel with his life, inextricably, for better or for worse - it is impossible to separate him from his time and its technology and image processing capabilities.

    And I do stand corrected on what's different about this work and what it's for (imagine using texture-by-numbers to generate landscapes for computer games rather than storing the textures). I will stand by the fact that when it comes to mimicking an artist's strokes, the result will only ever be a) as good as what's fed into it and b) a pale imitation, good only for some FX gag I expect to see in some music video and then vanish.

  • by garagekubrick ( 121058 ) on Tuesday June 26, 2001 @09:41AM (#127319) Homepage

    This is interesting and all well and good, but ultimately where it fails is that the produced image is entirely dependent on the original photograph's perception of the world. A photograph reproduces an image through halide crystal activation, which is enough for human memory and recognition, but it lacks the true meaning of the artist. Van Gogh never used contrast or flat lighting as exhibited in the source pictures, and he often burst highlights with striking colors that may not have been actually present to his eye. It's what separates him from a Turner - not just his brush stroke or how thick he worked in paint, but how he saw the world. It's pretty churlish to adopt the first real expressionist painter (who deliberately attempted to paint his perception of the world rather than reproduce it) as an example of this algorithm, as the resultant images show that without an interpretation or perception this is pretty useless stuff. All I see here is a souped-up Photoshop filter.

  • Van Gogh filters are cool and all, but in this geeky day & age, I would be more impressed with a code filter that could be trained to make my C programs look like they were written by Linus Torvalds... you know, change the indentation styles, white space usage styles, identifier naming styles, and how about going as far as to use the same algorithmic patterns too?

    The question is, what would happen if you fed your average Visual Basic program, written by your average VB coder, through a Linus Torvalds filter? Wouldn't that be like "crossing the streams" in Ghostbusters?
  • There already is a Slashdot Story Generator [bbspot.com]
  • ...how long will it be before somebody creates a Van Gogh version of goatse?
  • I kind of wish they would have added one more picture to compare which was an actual high res shot.

    So you could compare low res, high res, and filtered high res.

    They do look impressive though!
  • Unfortunately, it appears that the sample they gave it to learn from contains a vast majority of horizontal brush strokes, as Van Gogh tried to emulate the rippling of the water and match that in the night sky. The only non-horizontal strokes are in the stars, very slightly in the buildings and in the boat at the bottom. I wonder what it would have looked like if they had taught it from Sunflowers [vangoghmuseum.nl] or Irises [vangoghmuseum.nl] or even The Bedroom [vangoghmuseum.nl].

    It would be even more interesting to take The Bedroom and produce a before picture for it that straightens the odd angles and de-fisheyes it. What sort of schizophrenic things might it produce then?

  • Corel Painter 6, formerly Fractal Painter, has an auto Van Gogh filter - Fractal Painter 5 introduced it.
  • To me, the most interesting part of the project is their texture-by-numbers. I can see this being useful immediately. Anyone know of Photoshop filters that do similar tasks?
  • Because they generated the "before" shot of the training pair, not the "after" shot.

    i.e.: They started with a painting by Van Gogh, then ran it through the "Smart Blur" filter of photoshop to remove the Van Gogh-esque-ness of the painting, leaving a textureless image. Then they ran the texture learning algorithm over that pair of images, and applied the learned texture to the target.
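    That preprocessing step is trivial to reproduce. A sketch of manufacturing the 'before' half of a training pair, using a simple separable box blur as a stand-in for Photoshop's Smart Blur (which additionally preserves edges); `box_blur` is a name invented here:

```python
import numpy as np

def box_blur(img, radius=2):
    """Blur a grayscale image (2-D array) by averaging over a
    (2*radius+1)-square neighborhood, applied as two 1-D passes."""
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode='edge')
    # Average k shifted copies along rows, then along columns.
    out = np.stack([pad[i:i + img.shape[0]] for i in range(k)]).mean(0)
    out = np.stack([out[:, j:j + img.shape[1]] for j in range(k)]).mean(0)
    return out
```

    The blurred painting plays the role of the "unfiltered original" the learner never actually had.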

  • Training a computer to act like Van Gogh reminds me of the funniest piece of computer humor I have ever read [netfunny.com].
  • Hey this is pretty cool. You take a good photograph, and make it into a bad painting. Painting is more than just mixing oil paints to be the average color for part of an image... Van Gogh paintings typically emphasize the edges, the cheekbones, the jaw, the eyebrows. I'm bitter that someone can take an image, use xv to sharpen the image and then shift the colors and get a PhD out of it. I wonder if he was funded by the department of defense?
    If you'd even read the rest of the site, you'd realise that their software is *very* accomplished.

    Check out the Texture by Numbers [nyu.edu] sections for more examples of the flexibility of this software...


  • Then you are DEFINITELY in the wrong place!
  • As I was going through the page I was largely unimpressed. Much of what was there has been in Photoshop (and probably the Gimp) for years now, i.e. filters to make photos look like paintings or apply textures or whatever.
    However ... the texture-by-numbers section was exceedingly cool, and I could see a myriad of uses were someone to port it to Photoshop's plugin format. Being able to take an image, mask it with various different colors, and then have an entirely new image generated based on the textures you extracted ... yeah, that's nifty =)
    By far the coolest thing I've seen on /. in a while.
  • This is actually quite impressive software.
    Impressive how? Lots of paint programs have Van Gogh filters. It's easy to "reproduce" a given artist by superficially imitating some well-known aspect of his or her style. In the case of Van Gogh, you just need to fuzz up the image a little. But there's a lot more to Van Gogh than fuzziness!

    I'd be more impressed if they had trained the software to imitate multiple artists, such as O'Keeffe [byu.edu], Rembrandt [mystudios.com], Klee [postershop.co.uk], Gauguin [artprintcollection.com], Monet [cafeguerbois.com], Picasso [guggenheimcollection.org], etc., shown as a single image, as processed by all of the above, side by side.


  • They were working on this kind of image recognition in a JPL lab near my own in the early '90s. (I was an intern working on the flight computer for CRAF/Cassini at the time.) They were using some funktastic optical computer jive and I heard the thing was good... damn good. There was even a toy parking lot full of Hotwheels cars, and they would have the computer try to pick out a specific car, which might be in a group of other similar cars, or partly covered by a building, etc.

    If this was going on in an unclassified lab almost 10 years ago, I imagine that the best gov't computers today can easily Spot the Loony, or a tank, or an Asian woman, or nearly anything else desired.
  • Comments by Anonymous Coward are automatically scored as zero unless they are modded up. OTOH, some posters with very good karma are automatically modded up.
  • Wow, maybe those parts in movies (that we all criticize) where the computer nerd enhances a still, low-res image to show the bad guy's face in a blurry crowd of people might become reality sooner than anticipated. Then we'd all feel bad for laughing at those scenes...


  • There are already computers that behave like Salvador Dali. They run windows.

    Windows user: Fucking computer, it's on drugs.
    computer: "I don't use drugs. I am drugs."

  • oh yeah, forgot a few:

    windows dali box:
    "So little of what could happen does happen."

    "Have no fear of perfection -- you'll never reach it."

    "It is either easy or impossible."

    "It is good taste, and good taste alone, that possesses the power to sterilize and is always the first handicap to any creative functioning."

    "To gaze is to think."

    "Those who do not want to imitate anything, produce nothing."

    "Wars have never hurt anybody except the people who die."

  • Actually, what they haven't shown (that I saw) that would be most cool for this is to have a photo of a scene and a painting of that scene and have the filter learn that! Otherwise, what they have done that's still pretty neat is built a system that figures out what mathematically needs to be done to one set of numbers (i.e. the RGB bytes for each pixel) to get it to a second state. Not bad.
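    One toy illustration of "figuring out what mathematically needs to be done to one set of numbers to get to a second state" (purely hypothetical, and far simpler than the paper's method: it assumes the filter happens to be a per-pixel linear map) is to fit a gain and bias from the before/after pair by least squares, then apply them to new pixels:

```python
# Hedged sketch: fit a linear map (gain, bias) from "before" pixel
# values to "after" pixel values by ordinary least squares, then
# apply the learned map to a new image's pixels.

def fit_gain_bias(before, after):
    n = len(before)
    mx = sum(before) / n
    my = sum(after) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(before, after))
    var = sum((x - mx) ** 2 for x in before)
    gain = cov / var            # least-squares slope
    bias = my - gain * mx       # least-squares intercept
    return gain, bias

A  = [10, 20, 30, 40]           # "before" pixel values (invented)
A2 = [25, 45, 65, 85]           # "after": here exactly 2*x + 5
g, b = fit_gain_bias(A, A2)
B = [15, 35]                    # new image's pixels
print([g * x + b for x in B])   # -> [35.0, 75.0]
```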
  • What I've started to wonder is where else its underlying principles could be used, or where this sort of technology could lead in the future.

    I was thinking along the same lines! (umm, well, brushstrokes?) Granted the amount of computation likely precludes real-time generation today. But, I could well imagine that within (pulls number out of the air) 2-5 years, it should be quite possible. Applications? Here are some ideas (some are admittedly off-beat, but why not?)

    • Tivo: Watch Friends, Battlebots, etc. in a "whole new light".
    • Home videos: Show videos of your trip to the Grand Canyon, à la Rembrandt.
    • pr0n: Sometimes, the suggestion is more powerful than the details. Enter the impressionists!
    • Cartoons: Bugs Bunny, Wile E. Coyote, and Marvin the Martian done YOUR way.
    • Video Games: It would make for a real challenge to try to play as simple a transform as a watercolor rendition of Quake. For the truly adventurous, try a Picasso filter! :)
    • Any Videotape or DVD: You get the idea.

    I have no illusions the results would be spectacular, but I'm quite sure they could be really interesting. Heck, even a commercial or a presidential speech could take on a whole new perspective.

  • Did anyone else notice that for the "Van Gogh" images, it only did horizontal brush strokes? I'm not any kind of art buff, but I don't think artists would use only horizontal brush strokes. Although maybe it's a problem with the training data.

    Also, the "image enhancing" stuff wasn't too impressive, yet. Maybe with more examples it might do better. Of course, the thing to remember is it's not programmed for any particular filter, but it still had a problem ignoring small details, like the black borders for the rugs.

  • by BlowCat ( 216402 ) on Tuesday June 26, 2001 @09:37AM (#127343)
    For example, given a Van Gogh painting, the algorithm can process other images to look somewhat as if they were painted by Van Gogh.
    Another algorithm, given a post on Slashdot, can produce comments to look as if they were created by Slashdot readers.
  • to see Dali through cubist eyes....
  • I think what is missing is the composition. Van Gogh didn't just paint with funny streaks and wobbly reflections. He picked subjects and views that he wanted to express. Computers can do it, but not in the same vein as an artist. We would need a thousand monkeys first. Oh, and some absinthe (Absinth.com).

  • If you have a "before" and "after" shot (therefore implying that you have a/the effect filter), then why don't you just apply the same filter to image B that you did with A to get to that "after" shot?

    That is, why a "learning filter?"

    It would be more useful if it could discover a technique by looking at one image.

  • True, I have to agree with apnu.

    I hope the sample images aren't the be-all-and-end-all extent of their program with respect to Van Gogh-like rendering. Painting is to represent what you see in a permanent form - I believe Van Gogh would have turned a sky full of clouds into something other than a smudge of off-white patches - that he would have tried to create an accurate 'representation' of what he could see on the page.

    But nonetheless, it is an impressive learning algorithm and perhaps has a future in processing for image cleanup....you know, that crap they show in movies where a 'computer guy' presses a few buttons and a blurry satellite photo suddenly shows license plate numbers...

    Let's just say I was impressed by the overall approach, but not by the examples given (brush strokes too uniform and lacking attention to the defining details of the images).

  • This is exactly what it is, and nowhere near Van Gogh's original style. Those familiar with his style can straight away tell the difference, as the strokes here are all horizontal, while the artist's strokes went in all directions, giving a bold movement to the painting.

    Looks like these guys have much to improve. Yet it is an effort in the right direction, and we can congratulate them, for at least their program can copy lesser styles, if not the masters'.

    And above all, no matter how good a copy it is, the copy can never measure up to the original.

  • Easy there, Jin. I think most of us are smart enough to understand the difference between art and the programmed output of a machine. Even if by some fluke you and a computer were to produce the exact same image in the exact same medium, only yours would be art--pretty much by definition, I would think.

    To make a machine produce art would require the technology for a sentient computer. If the machine is really sentient, then it's probably as entitled to express itself as you or I.

    BTW, I like your stuff (sorry, that's the limit of my knowledge of art--I like it or I don't).
  • This guy posts the same link as me half an hour after I post it and HE gets modded up?
  • If someone can use this to encode DeCSS, so it can still be decoded, maybe THAT will convince people that it's an artistic piece.

    Seriously, though, this is an amazing project. I'm impressed.
    Blood, guts, guns, cuts;
    Knives, lives, wives, nuns, sluts.
  • As I was going through the page I was largely unimpressed. Much of what was there has been in Photoshop (and probably the Gimp) for years now, i.e. filters to make photos look like paintings or apply textures or whatever.

    Yes, Photoshop and the Gimp have had similar filters for a long time, but that's not the point of the project. Filters used in the Gimp and Photoshop were hand-created, and each is specialized and unique. This algorithm provides a general framework for learning how to apply a given filter. I.e., it can be used to easily create filter types that didn't exist, based just on sample inputs and outputs. It's broader than the relatively simple filter operations found in typical imaging tools...

    I must admit though, the texturizing feature is damn cool.
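    The "learn a filter from sample inputs and outputs" idea can be caricatured in a few lines. This is a deliberately naive sketch, nothing like the real multi-scale algorithm, and all pixel values are invented:

```python
# Toy illustration of "a filter learned from an example pair": build a
# lookup table from the pixels of the (input, output) example, then
# apply it to a new grayscale image, using the nearest key on misses.

def learn_lut(sample_in, sample_out):
    """Record, for each example input value, its output value."""
    return dict(zip(sample_in, sample_out))

def apply_lut(lut, image):
    """Map each pixel through the table, snapping to the closest key."""
    keys = sorted(lut)
    return [lut[min(keys, key=lambda k: abs(k - px))] for px in image]

example_in  = [0, 100, 200]
example_out = [255, 155, 55]        # hypothetical "negative"-ish filter
lut = learn_lut(example_in, example_out)
print(apply_lut(lut, [90, 210]))    # -> [155, 55]
```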

  • by sllort ( 442574 )
    you know it's only a matter of time till someone Van Gogh's the goatsecx man.

  • by bacaloca ( 447079 ) on Tuesday June 26, 2001 @10:05AM (#127374)
    This learning algorithm looks similar to a neural net. My CS professor told us a story about the US government making a neural net that would be used to recognize tanks in satellite photographs.
    They spent many hours loading photos for training data. Some photos had tanks and others did not. Once they thought it was working, they tried it on some non-training data --it didn't work.
    Instead of recognizing tanks, it learned to distinguish cloudy days from ones with sunshine. All of the pictures with tanks were taken on a cloudy day & all other pictures were taken on days that had sunshine.
    He didn't know if they ever actually got it working or not...
  • I think the important point of this article is not that they can make the painting look sort of like a Van Gogh painting.

    I think the important new discovery is that they can take an image and process it so it looks sort of like a Van Gogh, or a painting I did, or a painting your pet chimp did. Or maybe not even necessarily a painting at all.

    Instead of saying "here's a photo, make it look like a painting" they are saying "here's one image, now make this other image look sort of like the first one."


  • by Greenrider ( 451799 ) on Tuesday June 26, 2001 @09:32AM (#127376)
    Terrific...a computer that behaves like Van Gogh.

    Next thing you know I'll come home from work to find that my PC has severed its own mouse cord in a fit of psychosis.
  • The underlying techniques are actually pretty widely used already in image processing and pattern recognition. And, yes, similar techniques are used in text analysis and many other areas. This is mainly a particularly neat demonstration and application of them.

    How useful this particular application is remains to be seen; most people probably have a harder time giving an example of a Van Gogh filter for the system to learn from than just using a canned filter.

  • by Keith Handy ( 456832 ) on Tuesday June 26, 2001 @11:33AM (#127380) Homepage Journal
    You're correct that it's impossible to increase *actual* resolution, but in the cases of art and music it's often quite desirable to *simulate* increased resolution. Much is done with lo-fi music to enhance the perceived high frequencies to make it more pleasing to the ear, though this doesn't restore actual high frequency information lost from the original performance. I would assume similar principles are applied to visual art in this case.
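    A rough one-dimensional analogue of that audio trick, under the assumption that "simulated detail" just means exaggerating edges that are already there (an unsharp-mask-style boost) rather than recovering lost information:

```python
# Sketch: unsharp-mask-style sharpening of a 1-D grayscale signal.
# It boosts existing edges (perceived detail) but cannot restore
# real high-frequency content that was never captured.

def unsharp(signal, amount=1.0):
    out = []
    for i, v in enumerate(signal):
        left = signal[max(i - 1, 0)]
        right = signal[min(i + 1, len(signal) - 1)]
        blurred = (left + v + right) / 3.0      # simple 3-tap blur
        out.append(v + amount * (v - blurred))  # add back the "detail"
    return out

# An edge from 0 to 10: sharpening overshoots on both sides,
# making the edge *look* crisper than the original data.
print(unsharp([0, 0, 10, 10, 0]))
```

    Flat regions pass through unchanged; only the transitions get exaggerated, which is why this reads as "more resolution" without adding any.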
