Pixel Inventor Goes Back To the Drawing Board

lawpoop writes "Russell Kirsch, inventor of the square pixel, goes back to the drawing board. In the 1950s, he was part of a team that developed the square pixel. 'Squares was the logical thing to do,' Kirsch says. 'Of course, the logical thing was not the only possibility, but we used squares. It was something very foolish that everyone in the world has been suffering from ever since.' Now retired and living in Portland, Oregon, Kirsch recently set out to make amends. Inspired by the mosaic builders of antiquity, who constructed scenes of stunning detail with bits of tile, Kirsch has written a program that turns the chunky, clunky squares of a digital image into a smoother picture made of variably shaped pixels."
  • by Anonymous Coward on Wednesday July 07, 2010 @03:20PM (#32830058)

    Here's a relevant article about it:

  • by tepples ( 727027 ) <tepples@gmail.BOHRcom minus physicist> on Wednesday July 07, 2010 @03:24PM (#32830126) Homepage Journal

    Now who wants to write a rasterizer for non-rectangular pixels?

    From the article: The pixels are still square; they're just cut into two pieces along a line through the pixel, and each piece has a color. (It sort of reminds me of S3TC.) The edge of a polygon would have one piece for the front and one for the back, and any other points along it would have one piece for each of two texture samples.
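    tepples's split-pixel idea can be sketched in a few lines. This is a hedged illustration, not anything from the article: render_split_pixel is a made-up name, and parameterizing the cut as a line a*x + b*y + c = 0 is an assumption.

```python
def render_split_pixel(color_a, color_b, a, b, c, n=8):
    """Expand one two-piece pixel into an n x n tile of subsamples.

    The pixel is conceptually the unit square [0, 1) x [0, 1), cut by the
    line a*x + b*y + c = 0 (an assumed parameterization); each subsample
    takes color_a or color_b depending on which side its centre falls on.
    """
    tile = []
    for row in range(n):
        y = (row + 0.5) / n
        tile.append([color_a if a * ((col + 0.5) / n) + b * y + c >= 0
                     else color_b
                     for col in range(n)])
    return tile
```

    A rasterizer for polygon edges would then pick (a, b, c) from the edge equation and the two colors from the front and back samples, much as the comment describes.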

  • Analogue vs Digital (Score:4, Informative)

    by i_ate_god ( 899684 ) on Wednesday July 07, 2010 @03:30PM (#32830208)

    This sounds like the ongoing debate between analog and digital audio. Everyone likes using images like these [] during the debate, but the more resolution (bits) you give digital audio, the closer it gets to its original analogue (electrical) source.
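    The resolution point can be illustrated numerically. This is a hypothetical sketch (quantize is a made-up helper, and uniform quantization is an assumption): the peak error of a quantized signal roughly halves with every added bit.

```python
import math

def quantize(x, bits):
    """Snap a sample in [-1, 1] to a uniform grid with 2**(bits-1) levels
    per side (an assumed, simplified quantizer)."""
    steps = 2 ** (bits - 1) - 1
    return round(x * steps) / steps

# One cycle of a sine wave, quantized at increasing bit depths: the
# worst-case error shrinks as the resolution grows.
samples = [math.sin(2 * math.pi * t / 100) for t in range(100)]
for bits in (4, 8, 16):
    worst = max(abs(s - quantize(s, bits)) for s in samples)
    print(bits, worst)
```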

  • Re:Wait... (Score:2, Informative)

    by Anonymous Coward on Wednesday July 07, 2010 @03:31PM (#32830222)

    JPEG is discrete cosine transform-based. You mean JPEG 2000, which is entirely different despite the similar name. Anyway, it seems Mr. Kirsch is 10 years or so late.

  • by gstoddart ( 321705 ) on Wednesday July 07, 2010 @03:31PM (#32830224) Homepage

    No, no... They didn't have color in the 1940s. Just look at the movies from back then...

    Actually, you jest, but I remember the first time I saw footage from WWII that was in colour and being stunned, because it was so vivid.

    And, then there was the Russian guy [] who created colour photos in 1909 using techniques he created himself.

    There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy.

  • Re:Suffering ? (Score:0, Informative)

    by Anonymous Coward on Wednesday July 07, 2010 @03:48PM (#32830462)

    From looking at the two pictures, however, I'm thinking the guy just might have discovered a new way of compressing images.

  • by Lord Lode ( 1290856 ) on Wednesday July 07, 2010 @03:51PM (#32830508)

    1) A pixel wasn't "invented" by anyone. A pixel is a concept as straightforward as the wheel, language, or adding numbers; it's not a question of which single person "invented" it. Once the technology is there, it WILL be used, no matter what.

    2) What kind of screen are you going to use for that? Each pixel can have a different shape and size, so no fixed screen could display it. A square grid is the most uniform division of 2D space into units.

    3) If this had been about hexagonal pixels, I'd have found it cool.

    4) At best, this is a new compression scheme for storing pictures - but certainly not a way to display them (see 2)).

    5) Non-square pixels are not a new idea; see for example camera sensors.

  • Re:Hmmm... (Score:3, Informative)

    by msauve ( 701917 ) on Wednesday July 07, 2010 @03:55PM (#32830560)

    Ummm ... the guy created the first digital image, on the "only programmable computer" in the US at the time. I would say yes, that's an invention.

    From "A Brief History of the Pixel []":

    Paul Nipkow had filed a German patent application on his mechanical-scanning TV or Elektrisches Teleskop in 1884, in which he referred to Bildpunkte - literally picture points but now universally translated as pixels... Alfred Dinsdale had written the very first English book on Television in 1926, but instead of picture element he had used lots of other colorful language: a mosaic of selenium cells, a great number of small parts, thousands of little squares...

    There's much more, but it suffices to say that Russell Kirsch is only a minor footnote in the history of the pixel. He may have invented something, but it wasn't square pixels. No doubt someone colored in blocks on a sheet of graph paper to make an image before pixels were ever used in conjunction with an electronic device. And using square pixels on a computer-connected, raster-scanned display is just common sense, not an invention - it makes the math simpler.

  • Re:Suffering ? (Score:4, Informative)

    by Animaether ( 411575 ) on Wednesday July 07, 2010 @04:01PM (#32830672) Journal

    Blatant as it may be, I've read the article three times now - and Soilworker, you did well not to bother. I'm pretty sure the answer is not in there.

    This doesn't seem to be about square pixels in terms of display technology (where hexagonal pixels may indeed be superior).
    It also doesn't seem to be about picture acquisition.
    On the face of it, it seems to be talking about mapping rudimentary shapes to pixels so that they conform to a most-likely contrast-matching scenario with regard to surrounding pixels. Which some other posters here already pointed out with posts about JPEG and the like - but it's not really comparable to that either. Not in technique and not in performance.

    At best, as far as I can take away from it, it could be a different way to display an image when zoomed in / a technique that could be used when enlarging an image to provide greater apparent detail (although you wouldn't want to enlarge it - you'd want to store the masks found with the original image for display).

    The results in the news blurb look pretty decent and if nothing else 'different' from other 'smart scaling' methods, so it's worth exploring. But what this has to do with square pixels as we're mostly familiar with them, I have no idea.

    Now, about those hexagonal display pixels...

  • by hivebrain ( 846240 ) on Wednesday July 07, 2010 @04:33PM (#32831166)
    my favorite C&H ever.
  • Exceedingly silly (Score:5, Informative)

    by Virak ( 897071 ) on Wednesday July 07, 2010 @05:02PM (#32831730) Homepage

    First, here's the actual paper [], since it clarifies what exactly he's suggesting and doesn't seem to be linked anywhere in the article.

    It's not a suggestion that we start using non-square pixels for displays or cameras or scanners or what not, though he's certainly not being very clear about anything and the reporting on this is just making matters worse. What the paper proposes is a method where:
    1) The image is split into 6x6 blocks
    2) For each block, you go over the four rotations of two two-section masks - a triangular mask and a rectangular(ish) mask [ASCII mask diagrams not preserved] - for a total of eight effective masks, and average the values under each section, resulting in two values, A and B.
    3) For the mask and rotation that has the largest difference between A and B, you output the mask, the rotation, and the A and B values, resulting in 19 bits from a 6x6 (288 bits) block.
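    A hedged Python sketch of those three steps (the thread doesn't preserve the paper's actual masks, so the diagonal and one-third splits below, and the names split_masks / encode_block / decode_block, are assumptions for illustration):

```python
def split_masks(n=6):
    """Eight candidate two-section masks for an n x n block, each given as
    the set of (row, col) cells in section A. A diagonal (triangular)
    split and a one-third (rectangular-ish) split, in four rotations
    each, stand in for the paper's masks."""
    def rot(cells):  # rotate a mask 90 degrees within the block
        return {(c, n - 1 - r) for r, c in cells}
    triangle = {(r, c) for r in range(n) for c in range(n) if r + c < n}
    rect = {(r, c) for r in range(n) for c in range(n) if c < n // 3}
    masks = []
    for base in (triangle, rect):
        for _ in range(4):
            masks.append(base)
            base = rot(base)
    return masks

def section_means(block, mask):
    """Average the block's values inside and outside the mask (A and B)."""
    n = len(block)
    inside = [block[r][c] for r, c in mask]
    outside = [block[r][c] for r in range(n) for c in range(n)
               if (r, c) not in mask]
    return sum(inside) / len(inside), sum(outside) / len(outside)

def encode_block(block, masks):
    """Keep the mask with the largest |A - B|, per the maximize-contrast
    rule: 3 bits of mask index + 8-bit A + 8-bit B = 19 bits per block."""
    best = max(range(len(masks)),
               key=lambda i: abs(section_means(block, masks[i])[0]
                                 - section_means(block, masks[i])[1]))
    a, b = section_means(block, masks[best])
    return best, a, b

def decode_block(code, masks, n=6):
    """Paint section A with value A and section B with value B."""
    idx, a, b = code
    return [[a if (r, c) in masks[idx] else b for c in range(n)]
            for r in range(n)]
```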

    Though he talks of non-square pixels and whatnot, it's really just a compression algorithm. A really stupid one. Basically it's a bad variation of vector quantization, with lots of baffling details. Why 6x6 blocks? Why those specific masks? Why are you maximizing contrast instead of minimizing error like any sane person would do, WHY? There's no rationale given for any of these choices, not theoretical, not empirical, not even subjective.

    The same sort of rigor extends to his comparison, where he compares his compression algorithm to, instead of, say, another compression algorithm, the image apparently simply downscaled and then scaled back up. And not even with a halfway decent resampling algorithm, but with nearest neighbour. Not to mention that the "non-square pixels" version has 2.375 times as many bits to work with. If he'd done a comparison to a reasonably modern compression algorithm like JPEG, the results would be much less favorable to him.
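    For comparison purposes, that baseline is about the crudest resampling there is. An illustrative sketch (nearest_down_up is a made-up name) of what downscale-then-upscale with nearest neighbour amounts to:

```python
def nearest_down_up(img, factor):
    """Downscale by keeping one sample per factor x factor cell (nearest
    neighbour), then upscale by repeating that sample over the cell."""
    small = [row[::factor] for row in img[::factor]]
    return [[small[r // factor][c // factor] for c in range(len(img[0]))]
            for r in range(len(img))]
```

    Every cell collapses to its top-left sample, so fine detail is simply discarded; almost any real resampler (bilinear, bicubic) or codec would be a stronger baseline.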

    tl;dr Some old guy put together his My First Compression Algorithm kit and it's being treated like a revolution in graphics by ignorant reporters. Nothing to see here, move along.

  • Re:Huh? (Score:3, Informative)

    by dfghjk ( 711126 ) on Wednesday July 07, 2010 @05:14PM (#32831948)

    "Who says a display has to be raster-based?" The market did. If you knew your history, you wouldn't ask that question.

  • Re:Huh? (Score:3, Informative)

    by dsparil ( 844576 ) on Wednesday July 07, 2010 @05:22PM (#32832076)
    Pixel was completely misused in the article. He's working on an image scaling [] algorithm for photos. That isn't to say it's not noteworthy, interesting or important; it looks like it works great and I'm not aware of anything that produces results this good on photos. There is the Hqx [] family of filters, but those were designed for emulators and aren't meant to be used with more than 256 colors.
  • Re:Huh? (Score:3, Informative)

    by commodore64_love ( 1445365 ) on Wednesday July 07, 2010 @05:30PM (#32832202) Journal

    (1) If you make the pixels sufficiently small, nobody will notice whether they are square or triangular or whatever, because people won't be able to see anything but a bright point of light.

    (2) Not all pixels are square.

    Those used for TV-compatible computers like the Atari 800, Commodore 64, or Amiga were rectangular (taller than wide) because of the analog NTSC standard (which doesn't use pixels at all but has approximately 704x486 of analog resolution). These computers produced rectangular output to be consistent with that. DVDs also use non-square pixels for the same reason.
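    The stored pixel aspect ratio falls straight out of the frame size and the display aspect. A small illustration (the function name is made up, and the 704x480 digital sampling of that roughly 704x486 analog raster is an assumption):

```python
from fractions import Fraction

def pixel_aspect_ratio(width, height, display_aspect):
    """Width:height of one stored pixel when a width x height frame fills
    a display with the given aspect ratio."""
    return display_aspect / Fraction(width, height)

# A 704x480 NTSC-derived frame shown at 4:3 needs pixels slightly
# narrower than square, while 640x480 at 4:3 gives square pixels.
print(pixel_aspect_ratio(704, 480, Fraction(4, 3)))  # 10/11
print(pixel_aspect_ratio(640, 480, Fraction(4, 3)))  # 1
```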

  • Re:Suffering ? (Score:2, Informative)

    by flowwolf ( 1824892 ) on Wednesday July 07, 2010 @05:32PM (#32832226)
    There are plenty of screens in consumer devices this very day that only give an effective pixel measurement. Many are actually made up of tiny dots or rectangles layered offset from each other; a cluster of 3 or 4 different-colored dots can be considered 1 pixel. The commonly used method of subpixel rendering for font smoothing uses this substructure of a pixel to produce a better edge.
    If this guy's format of storing color information takes off, we could use the data within his files to create a better image across the substructure of a screen. I don't see what the problem is. With the proper software, photographs could be rendered better on almost any modern LCD by combining the substructure of a screen pixel with his variably shaped pixel format.
    The improvement may not be all that great, but new screen technologies are using effective pixel measurements more and more. We could see benefits on today's technology and lay the software groundwork for display manufacturers to stop cramming their technology into some square box which can only ever be an effective measurement.
  • Re:Huh? (Score:3, Informative)

    by vrt3 ( 62368 ) on Wednesday July 07, 2010 @05:50PM (#32832508) Homepage

    As I understand it, he proposes a system where each pixel (meaning in the image format, not on the physical display) can be subdivided into two areas, with different possible shapes (two rectangles on top of each other, two rectangles next to each other, two triangles) and different sizes for the two shapes. The best subdivision is chosen for each pixel, in a way that maximizes the contrast between the two areas.

    Or something like that; the text doesn't make it very clear.

  • Re:Huh? (Score:4, Informative)

    by parlancex ( 1322105 ) on Wednesday July 07, 2010 @10:28PM (#32834832)

    Actually, anti-aliasing is nothing like blurring. True anti-aliasing is a projection from a higher sample rate to a lower one, combining multiple source samples within the area covered by each sample at the lower rate. While not as accurate as the higher-resolution image, it is significantly more accurate than simply selecting one sample per area. Blurring would be selecting one sample within each area at the lower rate and then averaging neighboring samples, which means you actually end up with less information than the un-blurred, un-anti-aliased image.
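    The difference can be shown concretely. In this illustrative sketch (both function names are made up), a 4x4 checkerboard is downsampled by 2: averaging the covered samples recovers the true local mean, while picking one sample per cell and blurring afterwards has already thrown the detail away.

```python
def downsample_average(img, f):
    """Anti-aliasing as a sample-rate projection: each output value is the
    mean of the f x f input samples it covers."""
    n = len(img)
    return [[sum(img[r * f + i][c * f + j]
                 for i in range(f) for j in range(f)) / f ** 2
             for c in range(n // f)] for r in range(n // f)]

def downsample_then_blur(img, f):
    """The blur alternative: keep one sample per cell, then average each
    kept sample with its (edge-clamped) 3x3 neighbourhood."""
    small = [row[::f] for row in img[::f]]
    n = len(small)
    clamp = lambda i: min(max(i, 0), n - 1)
    return [[sum(small[clamp(r + dr)][clamp(c + dc)]
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1)) / 9
             for c in range(n)] for r in range(n)]

checker = [[(r + c) % 2 for c in range(4)] for r in range(4)]
print(downsample_average(checker, 2))    # all 0.5: the true local mean
print(downsample_then_blur(checker, 2))  # all 0.0: the detail was lost
```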
