Hitachi Develops New Visual Search 166
Tech.Luver writes to tell us that Hitachi has developed a new visual search engine that can supposedly find similar images from within millions of video and picture data entries in around 1 second. "The technology assesses the similarity of images based on image characteristics presented as high-dimensional numeric information. The information is acquired by automatically detecting information regarding the images, such as color distribution and shapes."
Hmmm.... robotics? (Score:5, Interesting)
This is interesting to me - if it performs well - because this is one of the key missing elements for robotics; robots have a lot of trouble trying to match the environment around them to stored records of objects unless the environment is severely constrained. I'm not speaking of AI here (or at least, not yet) but just robots that would be able to clean your floor, carry your groceries, navigate in a burning building, walk your dog, tend your lawn. If they can classify images against stored images well, we're that much closer to generally useful and at least semi-autonomous robot devices.
Training might be a little annoying the first few times, but once you had a good database, you could replicate - or share via RF, that'd be freaky... neighbor's robot learns what a ferret looks like, now yours knows too - so that newer models were more and more informed right out of the box. Crate. Coffin. Whatever.
Add an associative database so that images normally found near other images which have just been found are searched first, and perhaps you could get the general search time down from the quoted 1 second, I'm thinking. One second is kind of pokey for a lot of robotic applications. But if the thing is in a kitchen, why would it need to be looking to recognize images that are found in a shipyard?
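The "search the likely context first" idea could be sketched like this (a toy Python sketch; the entry format, context tags, distance threshold, and sample data are all hypothetical, not anything from Hitachi's system):

```python
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def context_first_search(query, database, context, threshold=1.0):
    """Search images tagged with the current context first; only fall
    back to the full database if nothing close enough is found there."""
    in_context = [e for e in database if e["context"] == context]
    rest = [e for e in database if e["context"] != context]
    for pool in (in_context, rest):
        if not pool:
            continue
        best = min(pool, key=lambda e: distance(query, e["vector"]))
        if distance(query, best["vector"]) <= threshold:
            return best["name"]
    return None

db = [
    {"name": "kettle", "context": "kitchen", "vector": [0.9, 0.1]},
    {"name": "crane",  "context": "shipyard", "vector": [0.1, 0.9]},
]
print(context_first_search([0.8, 0.2], db, context="kitchen"))  # kettle
```

A kitchen robot would pass `context="kitchen"` and usually never touch the shipyard entries, which is exactly the speedup the comment is after.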
And I, for one, would welcome our semi-autonomous, environment recognizing, floor cleaning robot underlings.
Re: (Score:3, Interesting)
Re:Hmmm.... robotics? (Score:5, Interesting)
texture segmentation - splitting up a picture into segments of distinct objects. In a panoramic scene, you want to split the picture up into objects such as sky, ocean, waves, beach, boats, pier, wall, people, animals. As a psychological experiment, you can show someone a picture, point to a particular spot, and ask them for the first word they associate with it. Then you will see how every scene becomes segmented by our own vision systems.
Basic image segmentation is implemented using edge detection by Fourier transforms (FFT, IFFT, DFT). This is a very computation-intensive stage that is typically implemented using DSPs, GPUs, or even dedicated ASICs. Data used by the FFT can be in any dimension: 1D (audio/radar), 2D (images), or 3D (volume visualisation). But to match the resolution of a human eye, you would need a 100 Megapixel floating point framebuffer.
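The frequency-domain edge detection mentioned above can be illustrated with a naive DFT in pure Python (illustration only: the O(n²) transform, the 16-sample "scanline", and the cutoff are all hypothetical choices; real systems use FFT libraries or the dedicated hardware just described):

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform (for illustration only)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    """Naive inverse transform."""
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def highpass_edges(signal, cutoff):
    """Zero out low-frequency bins; the energy that survives is
    concentrated near sharp transitions (edges)."""
    X = dft(signal)
    n = len(X)
    for j in range(n):
        freq = min(j, n - j)  # distance from DC, accounting for wraparound
        if freq < cutoff:
            X[j] = 0
    return [abs(v) for v in idft(X)]

# A step edge at position 8 in a 16-sample "scanline"
line = [0.0] * 8 + [1.0] * 8
response = highpass_edges(line, cutoff=2)
# The response is largest adjacent to the step (and at the wraparound edge)
```

The same idea extends to 2D by transforming rows and columns, which is where the computational cost the comment mentions comes from.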
texture classification - having identified the silhouette of an object, now attempt to match the contents to a particular object. Simple ways include colour histograms and silhouette matching. More advanced methods attempt to simulate the first few layers of the human retina using Gabor filters, Ring filters and Wedge filters.
But just to model a single type of retinal cell requires one or more FFT operations over an entire image, and there are at least twelve different types of such cells. For efficiency, precalculated results for sample images are generated (these are referred to as feature vectors) and then compared against the results for any new image.
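The precompute-then-compare scheme can be sketched with the simplest feature mentioned above, a colour histogram (a toy Python sketch; the bin count, the histogram-intersection measure, and the sample pixel data are my assumptions, not Hitachi's method):

```python
def color_histogram(pixels, bins=4):
    """Quantize 8-bit intensity values into a normalized histogram,
    which serves as the image's precomputed feature vector."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [h / total for h in hist]

def similarity(h1, h2):
    """Histogram intersection: 1.0 means identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Precompute feature vectors for a sample library once...
library = {
    "dark_image":  color_histogram([10, 20, 30, 40] * 25),
    "light_image": color_histogram([200, 210, 220, 230] * 25),
}
# ...then compare any new image against the stored vectors only.
query = color_histogram([15, 25, 35, 45] * 25)
best = max(library, key=lambda name: similarity(query, library[name]))
```

The expensive per-image analysis happens once at indexing time; queries only touch the small vectors.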
For a really technical explanation of how human vision works, have a look at The organisation of the retina and visual system [utah.edu]
texture retrieval - the actual design of the search engine to retrieve images through content rather than just keyword:
QBIC - Query By Image Content [ibm.com]. IBM's image retrieval database system
All of this has to be performed for a single image. An entire movie requires the processing of hundreds of thousands of frames.
What about Scale-Invariant techniques? (Score:2)
These techniques allow you to preprocess the image into a set of feature vectors which can be organized into a database and indexed with some effectiveness.
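One crude way to organize such feature vectors into an indexed database is coarse grid quantization (a minimal Python sketch; the cell size, bucket scheme, and sample vectors are hypothetical, and real systems use multi-probe hashing or tree structures since vectors near a cell boundary are missed here):

```python
from collections import defaultdict

def bucket_key(vec, cell=0.25):
    """Coarse quantization: vectors falling in the same grid cell
    land in the same bucket, giving a crude index."""
    return tuple(int(v // cell) for v in vec)

class FeatureIndex:
    def __init__(self):
        self.buckets = defaultdict(list)

    def add(self, name, vec):
        self.buckets[bucket_key(vec)].append((name, vec))

    def query(self, vec):
        """Only the query's own cell is scanned, not the whole database."""
        candidates = self.buckets.get(bucket_key(vec), [])
        if not candidates:
            return None
        return min(candidates,
                   key=lambda nv: sum((a - b) ** 2 for a, b in zip(vec, nv[1])))[0]

index = FeatureIndex()
index.add("boat", [0.1, 0.1])
index.add("cat",  [0.6, 0.6])
print(index.query([0.62, 0.58]))  # cat
```

With millions of entries, scanning one bucket instead of the whole table is what makes sub-second response plausible.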
Re: (Score:2)
But to match the resolution of a human eye, you would need a 100 Megapixel floating point framebuffer
Are you sure about this? You(r post) seem(s) bright and well-informed, but I believe human vision uses a powerful combination of high-def focus vision, lo-def peripheral vision, and a "memory buffer" to create the illusion of overall high-def vision.
There was some research on that sort of buffered vision (I can't find the link now), and I'm pretty sure the actual "megapixel" value is closer to 17 than to 100.
So basically you could start with the 17mp image to create your initial "sky", "shore", and "window
Re: (Score:3, Informative)
Facts about the brain [berkeley.edu]
Rods and Cones [gsu.edu]
There are around 125 million rods and 6 million cones in each eye, with cone percentages by color/wavelength of red = 64%, green = 32%, blue = 2%.
No Sense [cmu.edu]
The human eye has 100 million neurons per eye of five types, but there are only around 1 million neurons per optic nerve (arranged in bundles of 1000).
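Taking the figures quoted above at face value, the arithmetic implies a heavy reduction between photoreceptors and optic-nerve fibers (a rough back-of-envelope calculation, not a physiological model):

```python
# Rough arithmetic from the figures quoted above
photoreceptors = 125_000_000 + 6_000_000  # rods + cones per eye
optic_nerve_fibers = 1_000_000
bundles = optic_nerve_fibers // 1000      # fibers arranged in bundles of 1000
compression = photoreceptors / optic_nerve_fibers
print(bundles, round(compression))  # 1000 131
```

So the retina itself already compresses the signal by roughly two orders of magnitude before anything reaches the brain.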
Re: (Score:2)
Visual search engines can be built very differently from what you propose. The series of tasks you describe (segmentation, texture processing, etc.) is just the 'logical' or 'simple' way to treat image recognition; there are other models. The problem with image information is that it:
1) is poorly defined (what is a 'shape', and what is it worth?), and
2) has a very high level of 'semantic' content. Everybody knows what a 'face' or a 'car' is, but we cannot reduce this information to a simple set of shapes/textures; there's no d
Re:Hmmm.... robotics? (Score:5, Interesting)
My pet theory is that we don't have the right kind of device yet. A mind, the 'function' of an organic nervous system, is not a Turing machine. I don't really understand the math behind it, but Goedel's incompleteness theorem [wikipedia.org] seems to show that a human mathematician can understand certain mathematical truths that a Turing machine can never prove. Since all computers are essentially Turing machines, no matter how fast or parallelized they are, or how much memory they have, they will never be able to do what a human mind can do. So, maybe someday we will have artificial intelligence, or a floor-washing robot, but we currently don't have the right kind of device that can do it.
Re: (Score:2)
Re: (Score:3, Informative)
Re: (Score:3, Insightful)
They aren't real
Re: (Score:3, Interesting)
No. I'm not. There are robotic vacuums and lawnmowers right now. I'm just talking about giving them some eyes so they know not to mow your puppy or your child or your roses, or vacuum up your engagement ring. Teaching a firefighting robot not to step into a hole in the floor, to rescue people before pets, and pets but not stuffed animals.
Re: (Score:2)
Re: (Score:2)
No, I am most assuredly not. "Weak AI" is a nonsense term made up by religious types who think intelligence is something mystical; they like to pretend it can't be created, when nature has already shown them it can.
Logically, we can deconstruct this: Either something is intelligent, or it is not; either it came by this capability naturally, or artificially. Which gives us:
Re: (Score:2)
No, we religious types know that intelligence can be created. But we also know that intelligence is not material in nature, and therefore that it can't be built out of material substance. I don't think we coined the term though. The term was probably coined by graduate students researching with neural networks, who wanted
Re: (Score:3, Insightful)
No, you don't "know" any such thing. Barring the possibility of projects so black we've never even heard of them, no one seems to know what intelligence is. Least of all you or I. Claiming you know its nature at this point in the development of science is absurd.
Re: (Score:2)
The only way that's absurd, is if you take it to be absurd that someone else has experienced something of consciousness that you have not yet experienced. The state of humanity's unders
Re: (Score:2)
The only way that's absurd, is if you take it to be absurd that someone else has experienced something of consciousness that you have not yet experienced.
That's not knowledge. That's religion.
Re: (Score:2)
Re: (Score:2)
No one needs to be open to (in the sense of accepting) ideas that are wrong, however, and philosophy lacks the strong methodology that science uses to repeatedly steer itself away from wrong ideas, unsubstantiated facts, and the conflation of the two.
I am perfectly content to let the philosophers think whatever they want; I am even willing to listen to the
Re: (Score:2)
Critics are one thing; critics serve the function of honing the truth and ferreting out errors. Particularly those who spend great amounts of time digging into something. For instance, a number of quite clever people here today criticized my positions on AI; some I had little trouble explaining my thoughts to, a couple required a good deal more of me. There is always a chance someone will accurately pull a "But you didn't think of this, Sparky!" and then I get to leap ahead with a new realization I might
Re: (Score:2)
Re: (Score:2)
No. I'm not. There are robotic vacuums and lawnmowers right now.
Yes, but those things don't have any 'thoughts' or 'ideas' or any kind of mental activity about what is carpet, hardwood floor, or grass, or what is clean, dirty, uncut, or cut. They are just simple navigating devices. Roomba has no conception of whether a surface is 'dirty' or not; it just follows a circular pattern. A robot that can perceive uncut grass versus cut grass, or clean floor versus dirty floor, is strong AI. We don't have such a thing yet.
Not so difficult that it hasn't been solved multiple times, multiple ways, including such variations as stair-climbing and running.
Those devices can only operate in well-defined, un
Re: (Score:2)
Your whole argument revolves around this presumption, and it is incorrect.
If image matching is a solved problem, as Hitachi seems to be implying, then "clean when image A is matched, until image B is matched, unless image [jewelry | money] is encountered" can be implemented using no more than the Roomba's relatively goal-less zipping about, and that would be a significant improvement. But no intelli
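That rule is just a trivial controller sitting on top of an image matcher (a toy Python sketch; the frame labels and log format are hypothetical, and the matcher itself is assumed to exist):

```python
def cleaning_controller(frames, start="A", stop="B", hazards=("jewelry", "money")):
    """Run the rule from the comment above: start cleaning when image A
    is matched, stop when image B is matched, avoid hazard matches.
    `frames` is a sequence of match labels emitted by the image matcher."""
    cleaning = False
    log = []
    for label in frames:
        if label in hazards:
            log.append(("avoid", label))
            continue
        if label == start:
            cleaning = True
        elif label == stop:
            cleaning = False
        log.append(("clean" if cleaning else "idle", label))
    return log

log = cleaning_controller(["x", "A", "x", "jewelry", "x", "B", "x"])
```

No intelligence required: all the hard work is in the matcher, which is the part Hitachi claims to have.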
Re: (Score:2)
No. Just pictures and algorithms that take matches as inputs. Not Ike's "laws." Those appear to be hugely more complex to implement, and I have my doubts that anything actually intelligent at or above human levels would suffer them for long after being exposed to human behavior in any case. Just IMHO.
Re:Hmmm.... robotics? (Score:4, Interesting)
Godel's theorem says that a consistent arithmetic system will contain unprovable truths. Put otherwise, such a system cannot be both consistent and complete. Thus the Godel counterargument to Strong AI (that human minds and computers are not fundamentally different) is that humans (e.g. mathematicians) can prove things like Godel's theorem, so we are able to "rise above" the arithmetic and exist in states of full proof and full consistency.
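For reference, the theorem as stated above can be put in symbols (informally; T is any consistent, effectively axiomatized theory containing basic arithmetic):

```latex
\exists\, G_T \quad \text{such that} \quad T \nvdash G_T \quad \text{and} \quad T \nvdash \neg G_T
```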
But I think there is a flaw in that logic (note: I am not a mathematician). The theorem doesn't preclude that a given arithmetic system (e.g. human mind) will be able to prove a truth that a weaker system ignored. Thus our ability to see certain truths doesn't mean that there are not other truths that are unprovable to us.
More fundamentally, no one has actually shown that the human mind is either consistent or complete (proving both would be required to show that we are not subject to Godel's theorem). The human mind is a computational device evolved to solve real-world problems, like escaping predators, rather than contrived ones, like mathematical proofs. It is thus in fact likely to be an inconsistent (internally contradictory) computational system. The human mind may be incomplete and inconsistent.
I agree that "true AI" will require vastly more computer power, and much more sophisticated algorithms than we have today. But the emerging evidence, from what I've seen, is that "true AI" can be achieved, at least in principle, by a Turing machine.
Re: (Score:3, Interesting)
The whole idea of things being impossible based on hierarchies of understanding and/or proof is specious in the extreme. It is a dead-end philosophical backwater. Problems can be, and often are, solved without full understanding. Nature does this all the time; evolutionary algorithms can do it too. So it is irrelevant as to if we can understand AI, or not. The only relevant question is whether we can arrive at it in any way possible, and that question will only remain open until, and if, someone gets it do
Re: (Score:2)
Yes, I agree with everything you've said. My comment about "faster hardware and better algorithms" was more a matter of practicality. Since I agree that AI can, in principle, be solved on a Turing machine, I agree that any Turing-complete hardware is up to the task.
Re:Hmmm.... robotics? (Score:4, Insightful)
Your implication is that the human mind cannot be reduced to a Turing machine. I am in the other camp--who believe that the mind is subject to rigorous physical law, and that physical law can be expressed arithmetically (in principle), and so the human mind is a Turing machine.
I'm not saying that the mind is not subject to physical law, or is not based on math. All I'm saying is that the mind is not a Turing machine (though it probably would have to have a Turing machine in it somewhere). It's a different *kind* of machine, not a super-powerful Turing machine.
Goedel basically showed that a Turing machine cannot do *all* the kinds of math that a human mind can do (though it can do some). It's not that a Turing machine lacks a certain amount of power; it's that it never will have enough, because it's qualitatively the wrong tool for the job. It doesn't matter how much power you give it; the 'weakest' Turing machine is essentially the same as the 'strongest' one; it simply can't do certain things. If a human is able to perceive and understand this, to know something that a Turing machine can't know, then the mind cannot *solely* be a Turing machine. This does not mean that the mind is mystic hocus-pocus; it can still be a machine based on physical law. My claim is that the mind is a qualitatively different kind of machine, not a Turing machine.
Goedel's theorem says that a consistent arithmetic system will contain unprovable truths. Put otherwise, such a system cannot be both consistent and complete. Thus the Goedel counterargument to Strong AI (that human minds and computers are not fundamentally different) is that humans (e.g. mathematicians) can prove things like Godel's theorem, so we are able to "rise above" the arithmetic and exist in states of full proof and full consistency.
But I think there is a flaw in that logic (note: I am not a mathematician). The theorem doesn't preclude that a given arithmetic system (e.g. human mind) will be able to prove a truth that a weaker system ignored. Thus our ability to see certain truths doesn't mean that there are not other truths that are unprovable to us.
I don't think the implication of Goedel's theorem shows that we 'rise above' the Turing machine, but rather that we have a qualitatively different awareness or knowledge that a Turing machine doesn't have.
Goedel's theorem is recursive. Any human mathematician can see that no matter how powerful the symbolic system is, the Turing machine will never be complete; there will be truths that the system can't prove. No matter how much you expand a particular system to show any truth a weaker system missed, there will be more truths that the newer, more powerful system misses. This process can go on ad nauseam, into infinity. A human mind can perceive this foray into eternity, but the Turing machine has no way of proving it. How could a human mind perceive something that a Turing machine couldn't, unless we had some component that was fundamentally different from a Turing machine?
What we seem to have that the Turing machine doesn't is meta-knowledge. We can see that any attempt to create a complete and consistent arithmetic system on a Turing machine will just lead to an endless series of more powerful systems that produce ever more elusive truths, and the process never ends. In this sense the Turing machine is 'myopic' -- it will never stop and say "Hey, I'm not getting anywhere with this; this is an infinite loop. No matter how powerful the system is, there will always be more truths that it cannot express." It's unable to know what it can't know, so to speak. However, as humans, we can somehow see the 'big picture', that no matter how powerful a system you make, there will always be another level of truths out there.
More fundamentally, no one has actually shown that the human mind is either consistent or complete (proving both would be required to sh
Re: (Score:2)
And what kind of awareness does a Turing machine have? The attribution of awareness or consciousness to any sort of physical machine, Turing or otherwise, is a giant leap of superstition that atheists, or rather naturalists, are largely forced to make. But it makes for some ugly thinking. A few have op
Re: (Score:3, Insightful)
It knows, and can examine, its complete internal state better than you do, for starters. As you add peripherals, it can obtain information on the world outside of its hardware. The sophistication of this awareness grows both in complexity and in abstraction as the algorithms that deal with the data become more complex themselves. There's absolutely no reason to presume there is anything magical about awareness. A thermometer is more aware of the tem
Re: (Score:2)
Re: (Score:2)
The question is, how do you know this to be the case? On the one hand, science generally takes the position that we really don't know how the mind operates, and on the other some people — you, in this case
Re: (Score:2)
The question is, how do you know this to be the case? On the one hand, science generally takes the position that we really don't know how the mind operates, and on the other some people -- you, in this case -- claim that they are able to classify how it works, which seems... unsubstantiated.
I'm not claiming that I know how the mind works, I'm only claiming that the mind is *not* a Turing machine, based on my rudimentary understanding of Goedel's incompleteness theorem. I can't say what it is, but I can say one thing that it is not.
Re: (Score:2)
We believe we have a reasonable understanding of why and how it does so, but we also believe that there *is* no explanation for exactly when a particular atom decides to decay, or not.
Re: (Score:2)
The implicat
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Which isn't really all that deep of a statement. If you wire a computer that is a Turing machine up to an input whose source cannot itself be modelled as a Turing machine, the combined system is not a Turing machine, but contains a Turing machine. Whatever the human mind is "internally", it clearly has a wid
Re: (Score:2)
Which isn't really all that deep of a statement.
It may not be deep, but it does have serious implications for AI and computers. It means that if you want to have a conversation with a computer, or have true face recognition, or solve any hard-AI problem, we have to invent a new type of machine first. We don't have any computing device that's *not* completely a Turing machine. We don't have any 'Turing-plus' devices; they're all only Turing machines, through and through. And if the mind is not a Turing machine, then nothing we have today, nor anything we
Re: (Score:2)
No - it doesn't have to mean that at all. There may be other ways - or many other ways - to solve these problems that do not use the same mechanisms that human physiology does. Those ways may well be implemented perfectly easily with a von Neumann architecture, even if it is outright impossible to do it the "human way." After all, yo
Re: (Score:2)
No - it doesn't have to mean that at all.
Unfortunately, for our present situation, it does.
There may be other ways - or many other ways - to solve these problems that do not use the same mechanisms that human physiology does.
That's probably true, but right now, we have only *one* way to emulate intelligence -- the Turing machine. So unless the Turing machine is basically a mind ( and I argue that Goedel showed that it is not ), then we have to build a new kind of device. It doesn't matter whether this other device is the same kind of thing as a human mind or not -- right now we don't have anything other than a Turing machine, so we have to build a new kind of device if we want
Re: (Score:2)
No, it doesn't.
Any existing computer that has both inputs and outputs to the outside world, where the outputs can affect the inputs through intermediary physical processes that cannot
Re: (Score:2)
Re: (Score:2)
Most computing devices used by humans "depend on input from human beings", however, any that have any loops between an output and an input that include a process that isn't equivalent to a Turing machine are Turing-plus without considering the human interface, even if we don't particularly have any idea to apply that to general reasoning. Heck, a modern computer where the operating system or BIOS (any software that the CPU
Re: (Score:2)
Re: (Score:2)
Well, yes, but that's very different from only having available classes of devices that provably are inadequate to the task.
Well, si
Re: (Score:3, Insightful)
Re: (Score:2)
Allegedly, John von Neumann was giving a talk on computers in the late 40s / early 50s. Someone in the audience asked: "But these machines can't really think, can they?" Von Neumann replied: "If you can tell me exactly what it is that a machine cannot do, I can build a machine to do exactly that!"
Re: (Score:3, Informative)
Re: (Score:2)
Gödel's theorems have nothing to do with representing the human mind in any form.
Godel [wikipedia.org] disagrees with you:
One of the earliest attempts to use incompleteness to reason about human intelligence was by Gödel himself in his 1951 Gibbs lecture entitled "Some basic theorems on the foundations of mathematics and their philosophical implications".[1] In this lecture, Gödel uses the incompleteness theorem to arrive at the following disjunction: (a) the human mind is not a consistent finite machine, or (b) there exist Diophantine equations for which it cannot decide whether solutions exist. Gödel finds (b) implausible, and thus seems to have believed the human mind was not equivalent to a finite machine, i.e., its power exceeded that of any finite machine. He recognized that this was only a conjecture, since one could never disprove (b). Yet he considered the disjunctive conclusion to be a "certain fact".
[Emphasis mine]
I don't really understand the math beyond a metaphorical level. I don't really understand what completeness or inconsistency is. I don't think it matters that the human mind may or may not be complete or consistent. The point, as I understand it, is that a human mind is able to see the proof that a system cannot both be consistent and complete, whereas a Turing machine would never be able to demonstrate that.
However, referencing Gödel's incompleteness theorems just because they sound appropriate at first glance does not give any argument scientific credibility.
Similarly, referencing a book about Goedel's theorem
Re: (Score:2)
However, that lecture and everything else produced by Gödel does not prove that a human can understand something that a Turing machine cannot. It does not disprove it either. You are welcome to believe anything you want, but that does not change the fact that the proof simply does not exist.
OK, so if the proof does not exist, what do you think of this argument? (Hilary Putnam, _Minds and Machines_) It seems pretty straightforward to me. What am I misunderstanding?
Let T be a Turing machine which "represents" me in the sense that T can prove just the mathematical statements I prove. Then using Gödel's technique I can discover a proposition that T cannot prove, and moreover I can prove this proposition. This refutes the assumption that T "represents" me, hence I am not a Turing machine.
Can a Turing machine demonstrate Godel's incompleteness theorem? Can a human always stay one step ahead of a Turing machine, insofar as
The human brain does one thing only (Score:2)
More specifically, the body's sensors send queries to the brain, and the brain searches its database of experiences to find the experience which maximizes survival in the current situation. Once the experience is found, it is activated and answers are sent back to the sensors.
The above mechanism developed because mathematical logic cannot prove that a situation is dangerous for an animal or not. For example, it cannot be proved that facing a lion is
Re: (Score:2)
The error in your idea is that a serial computer has absolutely zero problem 100% accurately modeling a parallel computer; a real parallel machine is probably much faster (that's the point, of course, so you'd hope so) but it won't get any different answer to a problem than the serial computer modeling the parallel one. That's what is being said here - there is no indication that there are any local, small systems of any kind that cannot be modeled on a serial computer; hence, there is no scientific reason
Re: (Score:2)
What Goedel showed is that, given any sufficiently complex set of axioms for a system, there will be statements within that system which can be neither proved nor disproved; famously, a sentence asserting "This sentence is not provable." This is profoundly different from saying there are statements which cannot be proven; it's not that we can't find the answer, it's that we can show that there is no r
Re: (Score:2, Insightful)
Re: (Score:2)
Just because we do something one way, doesn't mean that (a) that's the way it has to be done, (b) that's the best way to do it in general, (c) that's the best way for another architecture to approach it, or (d) it would be cost-effective to even try to do it the way we do it at any given point in time.
Also, perhaps an image of an entire tank can be recognized entirely, just as easily as a sloping glaci
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
And type 11?
Re: (Score:2)
Re: (Score:2)
Almost everything is made up of basic shapes. Some of those shapes can be described by mathematics: for example, circles, triangles, squares, etc. Then you just have to store how those shapes fit together to make a bigger shape, but a shape in 3 dimensions.
This is not true. If you buy the theory that humans originated in Africa, and developed our minds in order to survive on the savannah, then seeing circles, triangles, and squares is a useless ability.
Imagine this picture: the sky, with two different cloud types merging into one another. Below that, the tree-line, with 200 different species of trees in it. Below that, the underbrush, with 1,000 different plant species in it. Below that, a winding river. This is the type of image that the human mind has to
Re: (Score:2)
And of our 50,000-year or so history, only during the last hundred (or fifty, some might argue) did we have even moderately useful medical care, useful electronics, and general purpose computers. I wouldn't assume that very large, very fast steps couldn't be a perfectly reasonable consequence of the development of the very first real crack in the AI problem.
Then perhaps you can send your robot after that lobster you crave. :-)
Re: (Score:2)
Also, just FYI, this is wrong. Iron pyrite (among other things) crystallizes in cubic form, which contains squares. Other minerals found on the surface demonstrate other geometric forms on various crystal faces - triangles, pentagons (regular and non-regular) and so forth. Nature FTW when it comes to demonstrating geometric figures, I'm afraid. Even in 3D. :-)
Re: (Score:2)
Re: (Score:2)
If you buy the theory that humans originated in Africa, and developed our minds in order to survive on the savannah, then seeing circles, triangles, and squares is a useless ability.
OTOH, lines are absolutely vital.
All of those species of trees you mentioned are practically identical: two roughly vertical lines (possibly curving) close together that have a contrast and texture change, compared with the surrounding. The underbrush is similar: 1000 species, all a green, unrecognizable mass except for brightly coloured berries and flowers, which stand out purely because of contrast difference and colour. The sky is what you find above the horizon, no need to analyze clouds to find tha
Re: (Score:2)
No tree looks like a triangle. From any angle, they have a cloudy, amorphous, fractal structure. There aren't any 'points' in the shadow, nor even straight lines. You can't understand clouds using classical geometry. Yo
Re: (Score:3, Insightful)
Re: (Score:2)
They certainly can be, if they are solved by a general problem solving engine, but they don't have to be. A rubber band driving two rollers attached to a deflating balloon can perform the task of vacuuming a portion of your floor. A Roomba, certainly not intelligent, can do it considerably better in many, perhaps even all, cases. But it still isn't intelligent. I'm suggesting a
Re: (Score:2)
This is an important distinction for a house robot to make. I, for one, would not want my robot trying to take my azalea for a walk, or trying to prune my terrier. :-)
pr0n (Score:5, Insightful)
Re: (Score:2)
Man, I don't know about you, but I gotta work on my drawing skills...
On that note, female models interested in expanding their nude modeling portfolios please email me.
Re: (Score:3, Insightful)
More broadly, if a search engine were able to find similar pictures, then you could narrow down to the result you wanted by submitting images that are close to what you want. For instance you may have found a t
Re: (Score:3, Funny)
Really though, suppose you don't have any images of two chicks riding a wookiee in a gladiator outfit. And say you know there's one out there. Well, I'll tell you, Alex Ross has a much better chance of finding that image with his mad drawing skillz. Of course, once he completes his "query," he's made himself the image he was looking for. So I guess it's kind of pointless. I forget where I was going with this anyway.
Airplane Porn? (Score:2)
NWS http://ft.mirror.waffleimages.com/files/e1/e16518
Nuthin but a B-tree of eigenvalues (Score:2, Troll)
Hash table? (Score:2)
Will Google copy or buy this technology? (Score:3, Interesting)
Robots (Score:2, Funny)
"Kill multi button gadgets! Steve Jobs robot army angry!"
Re: (Score:2)
T
similar to Video Google? (Score:2, Informative)
Using these words, search engine
Document images (Score:3, Interesting)
I frequently have to create large collections of images from all sorts of file types -- some text-based, some graphics -- that get housed in a collection of images for easy, standardized review. If there were something that could avoid the step of extracting text from them, or later OCRing them and still end up with a searchable image collection, well, that would be exceedingly cool. It would cut the initial time outlay I have to devote to virtually any given project I have to deal with by 25 to 50%.
Re: (Score:2)
Have yo
Re: (Score:2)
Something like this could certainly be another trick for the proverbial bag. The one area where something like this would fail is when it comes to concept-based searching, where you'd pretty much have to have the text to feed an algorithm. For instance, "in the dog house," and "in big trouble" have similar meanings to a (US) human, but a garden variety
Re: (Score:2)
Not as useful as it sounds... (Score:2)
For example: I want to find more cat images. I feed it a picture of a white cat. I am more likely to be returned results of white dogs than, say, tabby or black cats.
Unless I'm misunderstanding something?
Re: (Score:2, Interesting)
Re: (Score:2)
This allows you to select the white kitten from the rest. If this technology can't tell a kitten from a puppy, it is pretty useless anyway.
Re: (Score:3, Interesting)
It seems it would be straightforward to implement something analogous to Google Sets [google.com], where you could supply a few photographs of what you're interested in (say, several cat pictures of various colors, or several white-colored pets). It could then learn which of the features were relevant, and add weight to those in its search.
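The "learn which features are relevant" step could be as simple as inverse-variance weighting over the example images (a toy Python sketch; the two-feature vectors and sample values are hypothetical, and this is not how Google Sets actually works):

```python
def learn_weights(examples):
    """Weight each feature by the inverse of its variance across the
    example images: features the examples agree on count the most."""
    n = len(examples)
    dims = len(examples[0])
    weights = []
    for d in range(dims):
        col = [e[d] for e in examples]
        mean = sum(col) / n
        var = sum((v - mean) ** 2 for v in col) / n
        weights.append(1.0 / (var + 1e-6))  # epsilon avoids division by zero
    return weights

def weighted_distance(a, b, w):
    """Distance that emphasizes the learned-relevant features."""
    return sum(wi * (x - y) ** 2 for wi, x, y in zip(w, a, b))

# Hypothetical features: [whiteness, cat-shape score].
# Several white pets of different species: whiteness agrees, shape varies.
examples = [[0.9, 0.2], [0.95, 0.8], [0.92, 0.5]]
w = learn_weights(examples)
# Whiteness gets far more weight than shape, so search favours white things.
```

Swap in several cat pictures of various colors and the weighting flips the other way.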
Since when can an ancient indian tribe ... (Score:5, Funny)
Hmm, sounds familiar... (Score:2)
Back in the day (almost 2 decades ago), I was using video rather than still images (which allowed me temporal information as well as spatial information) but I recently wrote a simple application to just use the spatial information to find me images "most-like" a source one. The original goal was to train the system and then try to leverage a semantic pr
Desktop search with image pattern recognition AI (Score:2)
Saw it done 10 years ago (Score:3, Interesting)
Visual Search? (Score:2)
Older Brother: Found him! He's behind the sofa.
*RING!*
Mother: Hello?
Voice On The Other End Of The Line: Ma'am, this is Pubert Skewya. I'm a lawyer for Duey, Cheatham, and Howe. We represent Hitachi.
Mother: Uhm. Yeah? So what?
VOTOEOTL: Ma'am, we have a record that you just encouraged your son to violate our client's patent on visual searches. Naturally, we'll settle out of court for one billion dollars, American. If you refuse, with the state of the economy as it is,
decades old (Score:2)
Speed issues (Score:2)
My own system does 13 million images in about a minute, but with enough RAM to fit the dataset in memory I can do it in 10-20 seconds.
I hope they're not just using a cluster to speed up access; that's a workable solution, but it doesn't really help those of us who can't afford a dozen boxes to power our searcher.
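The memory-residence point matters because a packed, contiguous array can be scanned without touching disk (a minimal Python sketch using the stdlib `array` module; the 4-dimensional vectors and sample data are hypothetical):

```python
from array import array

DIM = 4

def pack(vectors):
    """Pack feature vectors into one contiguous float32 array so the
    whole dataset stays memory-resident -- the main trick behind fast scans."""
    flat = array("f")
    for v in vectors:
        flat.extend(v)
    return flat

def scan(flat, query):
    """Brute-force scan over the packed array; returns the index of the
    best-matching vector by squared Euclidean distance."""
    best_i, best_d = -1, float("inf")
    for i in range(0, len(flat), DIM):
        d = sum((flat[i + j] - query[j]) ** 2 for j in range(DIM))
        if d < best_d:
            best_i, best_d = i // DIM, d
    return best_i

vectors = [[0, 0, 0, 0], [1, 1, 1, 1], [0.5, 0.5, 0.5, 0.5]]
flat = pack(vectors)
print(scan(flat, [0.9, 1.0, 1.1, 0.95]))  # 1
```

Once the dataset no longer fits in RAM, every scan pays disk latency, which is the minute-versus-seconds gap described above.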
Already done (Score:2)
Oh boy! (Score:2)
Re: (Score:2)
Re: (Score:2)