
PhotoSketch Image Manipulation Tool Taking the World by Storm

Posted by ScuttleMonkey
from the photoshop-forum-goofs-rejoice dept.
PhotoSketch, a new image manipulation program that combines stick-figure sketches, internet image search and pattern matching, seems to be spreading like wildfire. Created by five Chinese students at Tsinghua University and the National University of Singapore, the tool takes a basic sketch and simple labels and turns it into a polished image. "Although online image search generates many inappropriate results, our system is able to automatically select suitable photographs to generate a high quality composition, using a filtering scheme to exclude undesirable images," say the PhotoSketch team in an abstract outlining the tool. "We also provide a novel image blending algorithm to allow seamless image composition. Each blending result is given a numeric score, allowing us to find an optimal combination of discovered images. Experimental results show the method is very successful."
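The abstract's step of scoring each blend numerically and then finding an "optimal combination" can be pictured as choosing one candidate photo per sketched item so the total blend score is maximal. A minimal Python sketch; the function name, data shapes, and brute-force search are illustrative assumptions, not the paper's actual optimization:

```python
from itertools import product

def best_composition(candidates):
    """Choose one candidate image per sketched item so the total
    blend score is maximal. `candidates` maps an item label to a
    list of (image_name, blend_score) pairs; a higher score means
    a more seamless blend. Brute force, purely for illustration."""
    labels = sorted(candidates)
    best_combo, best_score = None, float("-inf")
    for combo in product(*(candidates[label] for label in labels)):
        score = sum(s for _, s in combo)
        if score > best_score:
            best_combo, best_score = combo, score
    return {label: name for label, (name, _) in zip(labels, best_combo)}, best_score
```

With n items of k candidates each this search is O(k^n), which is fine for the handful of items in a sketch but would need pruning at scale.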
This discussion has been archived. No new comments can be posted.

  • correct links (Score:5, Informative)

    by sopssa (1498795) * <sopssa@email.com> on Friday October 09, 2009 @01:53PM (#29696053) Journal

    Since the link to the homepage in the article is an outdated one, here's the correct one:

    And the binaries [tsinghua.edu.cn] (it's a few command line programs, so no fancy UI)

  • Sketched (Score:4, Funny)

    by Iriscal (1563535) on Friday October 09, 2009 @01:56PM (#29696089)
    This image looks sketched. I can tell from a few of the pixels, and from having seen a few sketches in my time.
  • by jo42 (227475) on Friday October 09, 2009 @01:56PM (#29696091) Homepage

    If you sketch a big circle and two hands, will it come up with goatse?

  • by al0ha (1262684) on Friday October 09, 2009 @01:56PM (#29696101) Journal
    The authors of the program--Tao Chen, Ming-Ming Cheng, Ping Tan, Ariel Shamir, and Shi-Min Hu at the Department of Computer Science and Technology, Tsinghua University, and the National University of Singapore--presented it at Siggraph Asia 2009.

    An event that will be remembered forever in the History of Humanity as the day in which a million dorks were finally able to put themselves in X-rated positions with Megan Fox.
    • by NotBornYesterday (1093817) * on Friday October 09, 2009 @02:20PM (#29696413) Journal

      An event that will be remembered forever in the History of Humanity as the day in which a million dorks were finally able to put themselves in X-rated positions with Megan Fox.

      Many with decent Photoshop skills already can; this just lets millions more into the club without the need for any know-how.

      On a serious note, if this just outputs a flat .bmp or .jpg, I give it a "cool and fun, but not really useful". If it can output a .psd or .xcf with each element on a discrete layer, that would be excellent.

      • by adonoman (624929) on Friday October 09, 2009 @02:38PM (#29696613)
        Well, this being a research project / proof-of-concept type of thing, it's probably going to be bought up by a larger company (Microsoft, Google, Adobe) and made into a more useful bit of software. The actual output of this app is irrelevant: even if they composite the images into a flat image, at some point they've isolated the components, and getting those components into different layers of some other image format is a trivial extension. The important parts are really pulling useful images off the internet, and pulling together the important parts of those images.
        • by mikael (484)

          They have so many directions to go in - extend it to work with video (just a sequence of images once you get past the codec part). What about that software Microsoft had written to combine separate pictures into a single image?

        • by Wraithlyn (133796) on Friday October 09, 2009 @04:02PM (#29697743)

          I'm amazed at how well this seems to automatically extract subjects from their background, something that usually requires a lot of painstaking manual work... honestly that's the real challenge of "photoshopping", becoming a ninja with the selection tools.

          • by pjt33 (739471)

            Selection, perspective, and lighting.

          • by HuguesT (84078)

            That is because Photoshop selection tools plainly suck. Look up grabcuts [microsoft.com].

            Grabcuts is what this tool uses.
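GrabCut (Rother et al., available in OpenCV as `cv2.grabCut`) alternates between fitting foreground/background color models to the current mask and relabeling pixels with a graph cut. Below is a deliberately tiny grayscale stand-in that keeps only that alternating structure; the nearest-mean rule replaces both the Gaussian mixtures and the graph cut, so treat it as an illustration of the loop, not the real algorithm:

```python
def toy_grabcut(pixels, fg_seed, bg_seed, iters=10):
    """Iteratively refine a foreground mask over grayscale `pixels`.
    Alternates: (1) fit fg/bg models (plain means here) from the
    current mask, (2) relabel each pixel by its nearer mean."""
    fg_mean = sum(fg_seed) / len(fg_seed)
    bg_mean = sum(bg_seed) / len(bg_seed)
    mask = [abs(p - fg_mean) < abs(p - bg_mean) for p in pixels]
    for _ in range(iters):
        fg = [p for p, m in zip(pixels, mask) if m]
        bg = [p for p, m in zip(pixels, mask) if not m]
        if not fg or not bg:
            break
        fg_mean, bg_mean = sum(fg) / len(fg), sum(bg) / len(bg)
        new_mask = [abs(p - fg_mean) < abs(p - bg_mean) for p in pixels]
        if new_mask == mask:  # converged
            break
        mask = new_mask
    return mask
```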

          • by Jah-Wren Ryel (80510) on Friday October 09, 2009 @10:16PM (#29700923)

            I'm amazed at how well this seems to automatically extract subjects from their background, something that usually requires a lot of painstaking manual work... honestly that's the real challenge of "photoshopping", becoming a ninja with the selection tools.

            The reason the software is a binary distribution is that it is actually sending the images to hundreds of thousands of Chinese prisoners, who are made to use pirated copies of Photoshop to select the figures out of the backgrounds and then send the results back.

      • by Rei (128717)

        I wonder how well it does if you don't give it realistic shapes (say, if you just draw a circle for everything). If it copes, it'd be neat to have it draw elements from an RSS news feed, broken down into noun phrases and verbs, with descriptor-less nouns being combined with the verbs. I.e., it'd be a "guess which news story gave you this crazy picture?" program.

        "Surprised, humbled Obama awarded Nobel Peace Prize" -> "surprised, humbled Obama" "awarded" "nobel peace prize"
        "Obama's Nobel: The Last Thing He Nee

      • Re: (Score:3, Insightful)

        by NotQuiteReal (608241)
        On a serious note, if this just outputs a flat .bmp or .jpg, I give it a "cool and fun, but not really useful". If it can output a .psd or .xcf with each element on a discrete layer, that would be excellent.

        And a copyright release form. Or are snippets of other images non-infringing use?

        In other words, it probably doesn't matter what the output format is; it will just be "cool and fun", but not for redistribution.
        • by Knuckles (8964)

          If the photo agencies like iStock are smart, they are going to buy this, develop it, and include an online shop.

      • Re: (Score:3, Insightful)

        by LS (57954)

        I just give it a "cool and fun, but not really useful"

        I beg to differ. If the usage of this tool reaches high enough numbers, you will have a system in place for tagging a massive number of images with meta-data (both textual and symbolic), making image search MUCH more powerful. This system would get better with time, and would enhance other systems, if the collected data is utilized appropriately.

        LS

    • Re: (Score:3, Informative)

      by Dahamma (304068)

      Come on, don't plagiarize! At least give credit to Gizmodo for your cut and paste.

      http://gizmodo.com/5374890/this-is-a-photoshop-and-it-blew-my-mind [gizmodo.com]

  • OH GREAT (Score:5, Funny)

    by mujadaddy (1238164) on Friday October 09, 2009 @01:58PM (#29696127)
    Now NO ONE will believe the pics of me with Jessica Biel, Kate Beckinsale and Dolly Madison are real!
  • by Anonymous Coward on Friday October 09, 2009 @02:00PM (#29696143)

    This will make things way easier for Iran and North Korea.

  • How does it know which part of the photographs to mask out prior to composition? Have they pre-masked all the images in its database?

    • by BobMcD (601576)

      Watch the video. It is pretty impressive.

    • Re: (Score:3, Informative)

      by Carthag (643047)

      It says in the Vimeo link. Not gonna summarize it cause just look at the damn thing

    • Re:How does it mask? (Score:5, Informative)

      by StreetStealth (980200) on Friday October 09, 2009 @02:15PM (#29696347) Journal

      It appears from the video that it's running a fairly sophisticated series of algorithms to compare backgrounds and determine how difficult a convincing mask-out of the foreground object would be, with the expected shape of that object coming from a sort of heuristic reading of the user's sketch.

      For instance, if the background is a grassy field, the user has requested a running dog, and there is one photo of a dog running over grass and one of a dog running over pavement, the grass photo allows a greater margin of error in the masking and thus gets selected.

      Overall, this looks like a fantastic step forward for computer vision, bringing the computer ever closer to the non-Cartesian way our brains see.
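The grass-versus-pavement example above amounts to a scoring rule: prefer the candidate whose own background is closest to the target scene's, since a close match tolerates an imperfect mask. A tiny sketch, where the record layout and the distance metric are illustrative guesses at the idea, not the paper's actual criterion:

```python
def pick_candidate(target_bg, candidates):
    """Prefer the candidate photo whose background color is closest
    to the target scene's background: a close match leaves a larger
    margin of error for the mask. Colors are (r, g, b) tuples."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(candidates, key=lambda c: dist(c["bg_color"], target_bg))
```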

      • by xaxa (988988)

        This is the kind of thing that inspires me to do a PhD in computer science. I think all the techniques in the video were covered at some level in lectures at university (mostly this course [ic.ac.uk], IIRC) but seeing so much of it working in sequence with real photographs is impressive. (Of course, I saw impressive stuff at university. And this example is cool, whereas the most impressive computer vision work at Imperial College is medical, and I was too squeamish and usually distracted by the blood and skeletons.)

        Al

  • by BadAnalogyGuy (945258) <BadAnalogyGuy@gmail.com> on Friday October 09, 2009 @02:02PM (#29696171)

    I tried to draw a picture of a man with an erection. I labeled him "porn guy". Then I drew a picture of a woman with her mouth open and labeled her "porn whore cumshot".

    The composite picture was fine except that the man and the woman were far apart from each other. In addition, even if I were to draw them closer together (hey, I'm working with a mouse here), the result would still have been sized incorrectly.

    This technology holds lots of promise and is already pretty cool. I hope they can work out the kinks.

  • This is unbelievable (Score:5, Interesting)

    by rehtonAesoohC (954490) on Friday October 09, 2009 @02:05PM (#29696209) Journal
    The reality is that it was only a matter of time before someone came up with something like this, with examples like Microsoft Photosynth [livelabs.com], but this is an unbelievable implementation.

    I'm not 100% sure, but I can definitely see the potential for Google to snatch this up really fast and incorporate it into Picasa or even Google image search or something. The fact that something like this allows anyone (not just artists) to come up with novel images with minimal effort is fantastic. I do wonder how canned the images were, though. I.e., did they GIS for an image first, then use that image as a basis to draw the stick figure, knowing that their algorithm would pick the image they had selected in the first place? I would like to see a live demo with an unplanned audience member doing the drawing. Then I'll really be impressed.
  • by capt.Hij (318203) on Friday October 09, 2009 @02:06PM (#29696233) Homepage Journal
    In related news, anyone supporting current copyright laws has reinvigorated the economy after having to go out and purchase new pants. Cue the next great debate about copyright as we continue to try to shoehorn old ideas into the new world.
    • Isn't this the visual equivalent of a mashup?

      Aren't mashups already in a copyright gray area?

      • Re: (Score:3, Informative)

        by LordNimon (85072)
        No. Mashups are derived works, which fall under copyright quite clearly. Since this is in China, I'm pretty sure they're ignoring any IP laws and will probably get away with it. In the U.S., however, every one of those images had better be licensed for royalty-free distribution, or they'd be sued.
      • by Jah-Wren Ryel (80510) on Friday October 09, 2009 @04:32PM (#29698151)

        Isn't this the visual equivalent of a mashup?

        Aren't mashups already in a copyright gray area?

        In the US we have this really fucked up way of dealing with derivative works: the more complicated the work, the less of it needs to be incorporated into another work before it is considered an infringement. Yes, that's right: the more information in the original, the smaller the percentage of that information required to disqualify any fair use defense.

        So you can quote a couple of lines of a short poem in a book, or even have a character speak them in a movie, and that's generally OK. But sample just 3 notes [wikipedia.org] of another song and you are in deep doodoo. Similarly, any background artwork in a movie (even pictures hanging on the wall in the background of a scene, mostly out of focus and at very low effective resolution) requires clearance and licensing fees, frequently absurdly high ones. And of course just about any clip of video used in another movie or show, even on a television in the background of a scene, is going to require licensing too.

        Most Hollywood studios have an entire division devoted just to handling these clearances (look in the credits of most movies and you'll see at least one person credited as head of the clearances group). This practice keeps the "little guys" out of the motion picture business, much as patent pools are used to squash tech start-ups: all the studios hold large "pools" of our culture under copyright, and while the independent artist can't afford to license any of it for his work, the studios can cut each other sweetheart deals that guarantee cheap and easy access to each studio's "pool" of culture.

        So no, mash-ups, since they generally are 100% composed of samples of other songs, aren't anywhere near being gray in the USA.

      • The word you're looking for is "collage".

      • Re: (Score:3, Funny)

        by dangitman (862676)

        Aren't mashups already in a gray area?

        The problem is that you're letting your potatoes get exposed to air for too long before mashing them. Submerge them in iced water prior to mashing, and add some sour cream to the mash, then your mashup will have a creamy texture and clean white color.

  • by thewils (463314) on Friday October 09, 2009 @02:08PM (#29696259) Journal

    Tubgirl and goatse.cx are gonna crash.

  • by fluor2 (242824) on Friday October 09, 2009 @02:14PM (#29696341)

    Soon I can write a story, then just compile it, and it will show snippets of existing movies or rendered characters, and whoa, it's converted to a real movie, even with end credits: Directed and written by ME ME.
    Oh, I can't wait.

    • by buswolley (591500)
      You might be right. Except your characters will be changing faces/bodies all the time. That might be fine if you're doing a remake of a... ah damn, what's the name of that film again... my memory is shit.
    • by Fozzyuw (950608)

      This is the internet... it'll soon be used for 90% porn fanfic.

    • There's *always* some retard who screams that it's the end. THEEE EEEEND.
      Congratulations. YOU'RE WINNER! [youtube.com] (explanation [wikipedia.org])

    • by GaryOlson (737642)

      Directed and written by ME ME

      Tom Cruise, is that you?

    • Yeah. That could work.

      Face recognition software ought to be able to replace actors' heads with yours in all the appropriate places, get the lighting correct, etc., as well as re-map whole environments according to taste.

      With formula writing being what it is, you could probably even set up algorithms capable of recognizing romantic scenes versus action sequences. Humans are quite predictable as to what they respond to. It sounds like you could just punch in, "Romantic comedy, straight

    • by Jesus_666 (702802)
      So you say that anything that allows a writer to quickly go from a script to a movie mockup impedes creativity? I'd say it greatly enables it - with a program like that one could quickly generate animated storyboards, which isn't just awesome (thus enticing more people to do creative writing as a hobby) but also useful if you want to pitch your idea to someone as you can give them a rough impression of how the final result would look.

      Sorry, but I can't see your concept as anything but awesome. Sure, there'
  • by IamTheRealMike (537420) <mike@plan99.net> on Friday October 09, 2009 @02:15PM (#29696351) Homepage
    That's pretty damn cool. It reminds me of scene completion [cmu.edu], which is another take on the same idea - combining images from Flickr to create new images according to a brief sketch.
    • by Judebert (147131)

      I had the same thought. I'll bet they're using a similar algorithm, with Google Image Search providing the images from the tags, and the sketch providing the foreground/background information.

  • by jemtallon (1125407) on Friday October 09, 2009 @02:15PM (#29696353) Journal
    Having just taken a quick look through the config files and readme from the binary.zip file, it's pretty obvious this is very much a Proof of Concept release. You need to hard-code the number of sketched items, label them each in the config file, download the potential matched images to a specified directory, etc. It involves enough guess-work and too little documentation for me to proceed further, which is unfortunate. Has anyone else actually gotten it to work as described to confirm it does what it claims it can?
    • by GigaHurtsMyRobot (1143329) on Friday October 09, 2009 @03:57PM (#29697681) Journal
      I don't think all of the binaries were included. After getting the right OpenCV kit, the error indicates that some executables aren't there.

      You make a Workspace folder and put in it a .jpg with a simple line drawing for each "item". You make an ImageDownload folder, and in it a folder for each "item", with the downloaded images in those folders.

      It's missing exe's for these two lines from the ini file:

      AttentioinCut = true

      ShapMatching = true

      They can be switched to false, and then it will run and create data in the Workspace folder. The images in the Segment folder are neat... but it looks like the executable that actually stitches the images together was not included.
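Piecing this together with the parent comment, the on-disk layout appears to be roughly the following. Only the Workspace, ImageDownload, and Segment folders and the two ini flags are actually mentioned; the item name "dog" and everything else here is a guessed example, not documented behavior:

```
PhotoSketch.ini        ; per-sketch config, including the two flags quoted above
Workspace/
    dog.jpg            ; one simple line drawing per sketched "item"
    Segment/           ; intermediate segmentation output lands here
ImageDownload/
    dog/               ; downloaded candidate photos for the "dog" item
```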

    • by mugnyte (203225)

      Yes, I've gotten it to run, partially. I think most of this project is actually somewhat interesting compositing built around a bunch of manual masking.

      Got the OpenCV binaries for 1.0 (renamed to 110)

      PhotoSketch.exe still bombs, but i'm able to piece together some behavior with the other exe's.

      I went to Google images myself and downloaded 10 jpg's for "cowboy hat","spaceship","domo kun","stormtrooper","courtyard" and put them into a "c:\photosketch\download" dir

      Then ran "segmentation.exe c:\photosketch\dom

      • by mugnyte (203225)

          Going by the sample output [gizmodo.com] they published, I'm a little doubtful this was attained with just a "doodle and compile" concept. This is from only a short bit of time with the binaries. If anyone can create anything similar, let me know.

      • by josath (460165)
        Uh...instead of downloading OpenCV 1.0 and renaming the files to 1.1, why not download OpenCV 1.1 in the first place? msvcr* is the MS Visual C++ runtime...you can usually find it pretty quick just by googling for the DLL name (software devs that know what they are doing, redistribute it with their app...but obviously these guys just hacked it together really quick to make it work ok enough to meet their deadlines). Hopefully they package it up nicely...but I doubt it. Nine times out of ten, these cool pa
  • Although online image search generates many inappropriate results, our system is able to automatically select suitable photographs to generate a high quality composition, using a filtering scheme to exclude undesirable images

    Sounds like they took all the fun out of it to me....

  • 1) Open Source?
    2) Could the algorithm be used to find existing images similar to the one you just drew?
    3) When is a demo of this thing available?
  • Hoax? (Score:3, Insightful)

    by skeeto (1138903) on Friday October 09, 2009 @02:27PM (#29696497)

    This seems to be either a hoax, or something that will be extremely limited in ways they aren't discussing, to the point of having little use. If the examples they are showing are real, the image data set they are pulling from must have been manually processed and adorned with hand-made metadata.

    This falls too much into the "too good to be true" category for me to believe it.

    • by grumbel (592662)

      Yep, the object masking seems a little too good to be true; the composition part itself, on the other hand, seems doable once you have the objects masked out.

      One thing to keep in mind, however, is that they have the entire Internet to look for images in, so they might only pick the images that make masking out objects easy and ignore the tricky ones.

    • by Bigjeff5 (1143585)

      This seems to be either a hoax or will be extremely limited in ways they aren't discussing, as to have little use.

      Why? What are you basing that assessment on?

      If the examples they are showing are real, the image data set they are pulling from must have been manually processed and adorned with hand-made metadata.

      Why? From what I understand, the program currently needs to be configured for each sketch by hand (how many points to process, what the criteria are for the search, etc.), but I've seen nothing to suggest a program could not be written to do that automatically, and I didn't see anywhere that they had to customize the metadata of the photos to get their software to process them correctly.

      Skepticism is healthy, but it doesn't really help anything to just make stu

      • by mugnyte (203225)

        The paper is interesting, but if you look at the source code they published, their algorithms are doing terrible segmentation (at least on my samples).

        Also, each image must be brought local, segmented, then labeled with appropriate text matches for each segment. This is nontrivial, akin to splitting images apart and then associating text with each cutout.

        Then you search by keyword, find the segmentation with the best fit to your doodle, and import.

        They have some inter
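The pipeline this comment describes (bring each image local, segment it, label each segment, then search by keyword and pick the segment that best fits the doodle) can be sketched minimally. The record layout and the aspect-ratio "fit" below are invented for illustration; the real system does proper shape matching:

```python
def match_doodle(doodle_box, library, keyword):
    """Retrieve the pre-segmented cutout whose shape best fits the
    doodle. `library` maps keywords to segment records, each with a
    bounding 'box' of (width, height); fit is just how close the
    segment's aspect ratio is to the doodle's."""
    dw, dh = doodle_box
    target = dw / dh
    segments = library.get(keyword, [])
    if not segments:
        return None
    return min(segments, key=lambda seg: abs(seg["box"][0] / seg["box"][1] - target))
```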

      • Re: (Score:3, Interesting)

        by skeeto (1138903)

        Why? What are you basing that assessment on?

        I'm drawing on my experience with image processing and what I know of its current capabilities. I've worked on programs that do this sort of thing. So, yes, I "know some shit" about this subject.

        We're just not yet at the point where a program could be given an arbitrary image and have it recognize a wide variety of objects in the image. For example, take a single picture of a dog in a park (that is, no stereo imagery, or video, here): we don't have algorithms yet that can recognize the dog as an individual o

  • The application is very impressive, as far as the video goes. It automates what is essentially a human process of recombining existing material based on a hunch.

    Problem is, searching (for the base data set) for CC Share-Alike / commercially usable images is at best spotty (many artists don't care much about spelling out the image rights, and most others are jerks).

    So in practice, this will only be useful for private entertainment, maybe prototyping, but not for professional use.

    It's a great idea, actually a pretty innovative one, but it wil

    • by grcumb (781340)

      The application is very impressive, as far as the video goes. It automates what is essentially a human process of recombining existing material based on a hunch.

      Problem is, searching (for the base data set) for CC Share-Alike / commercially usable images is at best spotty (many artists don't care much about spelling out the image rights, and most others are jerks).

      So in practice, this will only be useful for private entertainment, maybe prototyping, but not for professional use.

      Ummm, have you considered the possibility that digital artists might want to use it on their own image collections?

      I could see this being a positively revolutionary tool for folks like Weta: Sketch out a story board, make a few wireframe animations, then map it onto existing collections of photographed/filmed material. Magical, says I.

      On top of that, who's to say that this wouldn't spur a whole re-use regime, where people would be paid well to create source materials?

  • by MaraDNS (1629201) on Friday October 09, 2009 @02:36PM (#29696597) Homepage Journal

    An interesting point: this research is being done in China, not the United States. Whatever happened to basic research being done in the US? Today's PARC laboratory is not in the US; it appears to be in China.

    This is not a good thing for people who live in the US. America's increasing dependence on outsourcing is destroying the US' capability to be competitive in today's environment.

    The Harvard Business Review has an excellent article [harvardbusiness.org] about how America is destroying its own future.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      America's core competency used to be research and innovative application.

      Unfortunately, copyright and patent laws intended to protect the small from the big ended up getting flipped around. Now those laws are used by the big to crush the small, and along with it all the innovation that might produce a hint of competition.

      America's future is bleak indeed unless one of two things happen:

      1. A tremendous amount of domestic-only jobs (such as commercial driving from one US location to another US location) are cr

    • by PhysicsPhil (880677) on Friday October 09, 2009 @04:11PM (#29697855)

      Give the Chinese credit where it's due. Setting aside any arguments about how Americans don't value science and technology any more, to expect China not to produce good research is foolish. It is a large country that is putting resources into science and technology. Combined with the fact that stricter immigration laws make the United States a less desirable place for overseas students to study, it's not a surprise. Based strictly on the relative populations of China and America, we should be asking why the Chinese aren't producing even more groundbreaking work.

      Americans forget that one of the main reasons they were the top dog in science and technology was because most of the world's population was doing subsistence farming. The kids of those farmers are now becoming scientists and engineers, and there is real competition now.

    • Re: (Score:3, Insightful)

      by CheeseTroll (696413)

      Not that I disagree about the US slipping in investing in basic research, but there are highly intelligent people in other countries, too. Innovation is not a zero-sum game.

    • by olau (314197)

      The fact that a Chinese university is doing cutting-edge research is a good thing for you Americans. It means they're getting richer, and thus a growing market for the pop-culture products and Hollywood entertainment you're so good at exporting. Maybe 80% of the entertainment on the telly here in Denmark (in Europe) is from the US.

      Now you just need to teach them to abide by your copyrights. Maybe they can teach you how to eat vegetables in return. Fix an obesity problem or two, eh?

    • OMG. We've got a mashup gap.

      Stop the presses, start the air raid sirens. Let's get another government agency on this quick!
    • An interesting point: this research is being done in China, not the United States. Whatever happened to basic research being done in the US? Today's PARC laboratory is not in the US; it appears to be in China.

      You see one interesting research project out of China and conclude the US is doomed? What about things like Photosynth, from University of Washington and Microsoft Research, both of which are in the US? Just look around here, or in the science and technology sections at Reddit, and you'll find plenty of stories about US basic research.

      • by lewiscr (3314)
        I live in the US, and therefore conclude that the US is doomed. Don't you watch the news?!?
    • by Blakey Rat (99501)

      An interesting point: This research is being done in China, not the United States. Whatever happened to basic research being done in the US?

      One well-known research project is done in China, therefore there is no basic research being done in the US!!!
      Brilliant deduction there, Sherlock.

    • Yeah, while China is leading in LOLcat generation, those lazy Americans are just reconstructing entire cities in 3D... http://grail.cs.washington.edu/rome/ [washington.edu]

    • by dbIII (701233)

      Whatever happened to basic research being done in the US?

      As a country, you guys hate science and scientists now, and have been heading down the road of anti-intellectualism since at least Reagan. Many people hated Carter and Clinton simply because they were well educated; hence the dumb-cowboy act of the Ivy League-educated previous President. Now, since you can't get technology without science, and immigration and entry conditions discourage the best overseas talent that gave you Silicon Valley, things are

    • Re: (Score:3, Informative)

      by FleaPlus (6935)

      An interesting point: This research is being done in China, not the United States. Whatever happened to basic research being done in the US?

      This is cool research and all, but it's not like it happened in a vacuum. Below is a copy of the references from the PhotoSketch paper, showing the prior work the current paper was built upon, the vast majority of which are from labs in the US or Europe:

      BELONGIE, S., MALIK, J., AND PUZICHA, J. 2002. Shape matching and object recognition using shape contexts. IEEE Trans. Pattern Anal. Mach. Intell. 24, 4, 509-522.
      BEN-HAIM, N., BABENKO, B., AND BELONGIE, S. 2006. Improving web-based image search via content based clustering. In Proc. of CVPR Workshop.
      DIAKOPOULOS, N., ESSA, I., AND JAIN, R. 2004. Content based image synthesis. In Proc. of International Conference on Image and Video Retrieval (CIVR).
      EITZ, M., HILDEBRAND, K., BOUBEKEUR, T., AND ALEXA, M. 2009. Photosketch: A sketch based image query and compositing system. In SIGGRAPH 2009 Talk Program.
      FARBMAN, Z., HOFFER, G., LIPMAN, Y., COHEN-OR, D., AND LISCHINSKI, D. 2009. Coordinates for instant image cloning. SIGGRAPH 2009.
      FELZENSZWALB, P. F., AND HUTTENLOCHER, D. P. 2004. Efficient graph-based image segmentation. Int. J. of Comput. Vision 59, 2, 167-181.
      FERGUS, R., FEI-FEI, L., PERONA, P., AND ZISSERMAN, A. 2005. Learning object categories from Google's image search. In Proc. of ICCV.
      GEORGESCU, B., SHIMSHONI, I., AND MEER, P. 2003. Mean shift based clustering in high dimensions: A texture classification example. In Proc. of ICCV.
      HAYS, J. H., AND EFROS, A. A. 2007. Scene completion using millions of photographs. SIGGRAPH 2007.
      HOU, X., AND ZHANG, L. 2007. Saliency detection: A spectral residual approach. In Proc. of CVPR.
      JACOBS, C., FINKELSTEIN, A., AND SALESIN, D. 1995. Fast multiresolution image querying. In SIGGRAPH 1995.
      JIA, J., SUN, J., TANG, C.-K., AND SHUM, H.-Y. 2006. Drag-and-drop pasting. SIGGRAPH 2006.
      JOHNSON, M., BROSTOW, G. J., SHOTTON, J., ARANDJELOVIC, O., KWATRA, V., AND CIPOLLA, R. 2006. Semantic photo synthesis. Proc. of Eurographics.
      LALONDE, J.-F., HOIEM, D., EFROS, A. A., ROTHER, C., WINN, J., AND CRIMINISI, A. 2007. Photo clip art. SIGGRAPH 2007.
      LEVIN, A., LISCHINSKI, D., AND WEISS, Y. 2008. A closed-form solution to natural image matting. IEEE Trans. Pattern Anal. Mach. Intell. 30, 2, 228-242.
      LI, Y., SUN, J., TANG, C.-K., AND SHUM, H.-Y. 2004. Lazy snapping. SIGGRAPH 2004.
      LIU, T., SUN, J., ZHENG, N.-N., TANG, X., AND SHUM, H.-Y. 2007. Learning to detect a salient object. In Proc. of CVPR.
      MANJUNATH, B. S., AND MA, W. Y. 1996. Texture features for browsing and retrieval of image data. IEEE Trans. Pattern Anal. Mach. Intell. 18, 8, 837-842.
      PEREZ, P., GANGNET, M., AND BLAKE, A. 2003. Poisson image editing. SIGGRAPH 2003.
      RAJENDRAN, R., AND CHANG, S. 2000. Image retrieval with sketches and compositions. In Proc. of International Conference on Multimedia & Expo (ICME).
      ROTHER, C., KOLMOGOROV, V., AND BLAKE, A. 2004. "GrabCut": interactive foreground extraction using iterated graph cuts. SIGGRAPH 2004.
      SAXENA, A., CHUNG, S. H., AND NG, A. Y. 2008. 3-D depth reconstruction from a single still image. Int. J. of Comput. Vision 76, 1, 53-69.
      SMEULDERS, A., WORRING, M., SANTINI, S., GUPTA, A., AND JAIN, R. 2000. Content-based image retrieval at the end of the early years. IEEE Trans. Pattern Anal. Mach. Intell. 22, 12, 1349-1380.
      WANG, J., AND COHEN, M. 2007. Simultaneous matting and compositing. In Proc. of CVPR, 1-8.

  • by tedgyz (515156) *

    Great! They create the perfect pr0n tool and disable the feature. I'll wait for the haxx0red version from Russia.

  • Pics or it didn't happen.

  • xkcd (Score:5, Interesting)

    by TheGreatOrangePeel (618581) on Friday October 09, 2009 @02:57PM (#29696881) Homepage
    Wow. I'm instantly instilled with the urge to plug an XKCD comic into this and see what happens.
  • by benjfowler (239527) on Friday October 09, 2009 @03:03PM (#29696993)

    I suppose it won't work if you try sketching the Dalai Lama?

    I've heard of academic projects on filtering out porn (Australian military didn't want people surfing smut on the clock). I'd imagine that filtering out pics of the Dalai Lama would be harder...

  • Compositions of copyrighted and uncleared images; just what we need. I assume those students who created it were law students?

  • I've spent months on 4chan trying to find hilarious pictures of zombie-dino-anonymous-jesus-hitler goatse that just DO NOT EXIST. With the latest technology from the interweb I can save several hours of stolen Photoshop work and just sketch, with the only hand I have above the desk, the exact image of my pope-toilet-donkey-satan-pedobear-moneyshot-mobile. Sauce included.
  • Why am I reminded of Stalin getting people airbrushed out of photos?

    And how long until someone uses it to show Obama sharing a meal with Osama bin Laden?

  • by zindorsky (710179) <zindorsky@gmail.com> on Friday October 09, 2009 @03:24PM (#29697271)

    Someone should take all the XKCD comics, mark 'em up a bit, turn 'em into nice pictures, and .... Profit!!
