A smart use of Moore’s Law
In the past I have written and spoken about what I called “Moore’s Wall”: the notion that expanding computing capabilities keep raising the bar for content, which drives up costs and development times without actually yielding better products.
Well, Toshiba just announced a TV at CES that circumvents this in a clever way. The TV has a Cell chip in it, which makes it outrageously powerful for a TV. So powerful that it can do silly things with the extra processing power, such as interpolating frames or applying special video effects.
Or render the image twice at full speed, so that it can turn any signal into a 3D image.
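To make “interpolating frames” concrete: at its simplest, interpolation means synthesizing an in-between frame from its neighbors. Here is a minimal Python sketch of naive blending; an actual set like Toshiba’s does motion-compensated interpolation, which is far more involved, so treat the function below (and its invented names) as an illustration of the idea, not the real algorithm.

```python
# Minimal sketch of naive frame interpolation: synthesize an in-between
# frame by linearly blending two neighbors. Real sets use motion-compensated
# interpolation; this only illustrates the "spend compute to transform
# existing content" idea.
import numpy as np

def interpolate_frame(frame_a: np.ndarray, frame_b: np.ndarray, t: float = 0.5) -> np.ndarray:
    """Blend two frames (H x W x 3 uint8 arrays) at position t in [0, 1]."""
    blended = (1.0 - t) * frame_a.astype(np.float32) + t * frame_b.astype(np.float32)
    return blended.round().astype(np.uint8)

# Example: turn 30 fps into 60 fps by inserting one blended frame per pair.
if __name__ == "__main__":
    a = np.zeros((4, 4, 3), dtype=np.uint8)       # stand-in frame
    b = np.full((4, 4, 3), 200, dtype=np.uint8)   # stand-in frame
    mid = interpolate_frame(a, b)                 # the new in-between frame
    print(mid[0, 0])                              # -> [100 100 100]
```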
In effect, this means the problem of Moore’s Wall is circumvented, to a degree: instead of raising the caliber of the content needed, it uses the computing power to transform the content we already have.
I like this idea, in part because it has a lot in common with notions about standard formats and the like. But it also suggests a parlor game: what would <insert device here> be like with insane computing power but no changes to the rest of the technology? We have started to see glimmers of that in the way phones and iPods have been changing, of course, and the idea of networked fridges that detect spoiling food has been around forever… but I am wondering about things like this, which seem to magically upgrade everything we already had.
13 Responses to “A smart use of Moore’s Law”
This is a very interesting notion. The fact is that many innovations in areas where they are really needed come from the entertainment industry, because it has a superior way of offsetting R&D costs. In a previous life, I worked in embedded systems, where we commonly dealt with the idea of making portable embedded devices do more things. Sadly, the majority of those projects were esoteric parts of much larger systems, so you wouldn’t know about them. (For example, did you know that the average automobile has about 130 processors on board, distributed across different systems? Yet this computing power is effectively transparent when the user interacts with the car: they get a better experience but do not know exactly why.)
Imbuing common devices with computing “super powers” is an interesting idea. Toys could certainly benefit from this; autonomous race cars and intelligent pets are obvious uses, but what about headphones that know what you like to listen to based on surrounding conditions? Rainy afternoon? Your playlist starts playing Norah Jones… Baseball bats that toughen and lighten based on piezoelectric frames are possible; this has already been done for tennis racquets. Chairs that conform to optimal comfort based on sensors measuring the stress on different parts of the body. I have heard of “food robots” in Japan that dispense different amounts of ingredients in a cooking program. Combining highly specialized functions in a networked environment seems like it could be a boon for cleanup crews, cooks, gardening, and so on. Lots of interesting ideas there.
I recommend a book called “When Things Start to Think” by Neil Gershenfeld as a great starting point. It’s a small book, but it has some interesting ideas.
I clicked through to the story you linked to, and it’s still not clear. Is this thing trying to analyze totally 2D images and “decide” which parts should be made to appear closer or farther, and by how much? I could see it gaining a little of this information by analyzing parts of the image that move around from frame to frame, but not enough to keep it from being far inferior to showing an image that was captured or generated in 3D originally, whether from a dual camera or 3D-rendered animation.
There have been 3D movies made in the past by taking movies shot in 2D with a single camera and having artists manually select different parts of the image to bring forward or push back. A nice gimmick, but still clearly inferior to things that were shot in 3D from the start. Apart from rare exceptions where they worked really hard, each object, person, etc. looks like a flat piece of paper with the image on it that was moved forward or backward; a person’s nose doesn’t stick out closer to you than the rest of their face, and so on. A smart algorithm might catch some of those internal details for things that move in a video, depending on how much each detail tends toward perpendicular versus parallel to the plane of the TV. Motionless objects in the scene, though, I’d think it couldn’t do much with at all, save to place them closer or further than the moving objects based on whether those objects appear to move in front of or behind them.
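To put that limitation in code: here is a rough toy sketch (my own construction, not any vendor’s algorithm) of the “depth from motion” guess, which block-matches two frames and treats motion magnitude as a crude stand-in for depth. Note that static blocks come out at zero, which is exactly the failure mode described above.

```python
# Toy "depth from motion" sketch: estimate how far each block moved
# between two grayscale frames and treat larger motion as "closer"
# (a crude parallax assumption). Static blocks get no depth cue at all.
import numpy as np

def block_motion_depth(prev: np.ndarray, curr: np.ndarray,
                       block: int = 8, search: int = 4) -> np.ndarray:
    """Return a per-block 'depth' map: magnitude of the best-match motion."""
    h, w = prev.shape
    depth = np.zeros((h // block, w // block), dtype=np.float32)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = curr[y:y + block, x:x + block].astype(np.float32)
            best_err, best_mag = np.inf, 0.0
            # Exhaustive search over a small window of candidate offsets.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    cand = prev[yy:yy + block, xx:xx + block].astype(np.float32)
                    err = np.abs(ref - cand).mean()
                    if err < best_err:
                        best_err, best_mag = err, float(np.hypot(dy, dx))
            depth[by, bx] = best_mag  # zero for motionless blocks
    return depth
```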
I’m tending to label this idea a “flash in the pan gimmick” rather than a “long-term successful core technology.” Though it’s a cute idea.
A perfect example of this was the Neo Geo. I can’t hunt down the specs right now, but they took the same processors everyone else had at the time and tacked on something like 4x the memory. It’s why Samurai Shodown looked so much more amazing than Street Fighter 2 at the time, and why Metal Slug looked so much better than Contra. You had 4x the stuff moving around the screen thanks to that memory. It made backgrounds look more alive, explosions bigger, bosses bigger, more enemies on screen, the whole nine yards.
If this leads to RealD movies being available in 3D on Blu-ray, then I’m all for it.
tbowl, that is exactly the opposite of what I meant. 🙂 That’s making the content creation task harder and harder and more expensive.
Here’s another example of this idea:
http://venturebeat.com/2010/01/06/casios-digital-art-frame-converts-photos-into-works-of-art/
A photo display frame that applies Photoshop-style filters to your photos. And it has face recognition, so you can add a smile to a portrait after the fact with a single button.
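For a sense of what a “Photoshop-style filter” in such a frame amounts to, here is a minimal sketch of one classic example: a sepia tone applied as a per-pixel color matrix. The Casio frame’s actual filters (and its smile-adding face recognition) are proprietary; this just shows the general shape of the operation.

```python
# Illustrative sketch of a simple photo filter: a standard sepia tone,
# applied by multiplying each RGB pixel by a fixed 3x3 color matrix.
import numpy as np

SEPIA = np.array([[0.393, 0.769, 0.189],
                  [0.349, 0.686, 0.168],
                  [0.272, 0.534, 0.131]], dtype=np.float32)

def sepia(image: np.ndarray) -> np.ndarray:
    """Apply a sepia tone to an H x W x 3 uint8 RGB image."""
    toned = image.astype(np.float32) @ SEPIA.T  # per-pixel color matrix
    return np.clip(toned, 0, 255).astype(np.uint8)
```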
The idea of newly increased bandwidth (or some other resource) being used, at first, for transformation is as old as bandwidth itself. The classic example I’ve heard over the years is the rail system in the US. It was originally built to haul goods. Passenger service on long-distance trains was, economically, not a great idea at the time. Why? Nobody wanted to go long distances that fast. Sure, the trains added some passenger cars and, initially, charged exorbitant prices; after all, anybody who wanted to travel far and fast *had* to be rich. After the rails had been out for a few years, though, the railroads realized, “Hey, we can add more passenger cars with almost no increase in cost.” So they did. And people used them. Which meant more people moved out West, which meant more transfer of goods was needed, and so on.
People also originally questioned the need for phones in individual houses. Sure… one for the block, but seriously… who are you going to call? Same for cell phones. Police, firemen, construction workers, traveling salesmen… who else needs one? But once the capability is there, creative uses follow.
Same for text messaging, by the way… I was in the cellular industry when it started, and it was originally a cheap text-pager alternative for folks who couldn’t afford cell phones. Now? Texting is a huge cash cow for the networks, as they charge insane prices for delivering a thimble’s worth of data. But people want and need texting. On their phones. Which are mobile. And now have little computers (I love my Droid!).
TV can do computer stuff? Cool. Can it time-shift the volume by a fraction of a second I won’t notice, in order to make the volume more level? I hate movies where the “booms” are huge and loud, and then I can’t hear the conversations. I want the volume to be relatively stable. Maybe a nice computer could help me with that…
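For what it’s worth, that volume-leveling wish is basically dynamic range compression. A hedged sketch follows, with invented names and constants; a real TV would use more careful lookahead limiting.

```python
# Simple feed-forward dynamic range compressor: track a smoothed loudness
# envelope and attenuate samples above a threshold, so loud "booms" end
# up closer to dialogue level. Names and constants are illustrative only.
import numpy as np

def level_volume(samples: np.ndarray, threshold: float = 0.3,
                 ratio: float = 4.0, smoothing: float = 0.001) -> np.ndarray:
    """Compress float samples in [-1, 1] toward a more even loudness."""
    out = np.empty_like(samples)
    envelope = 0.0
    for i, s in enumerate(samples):
        # Track a smoothed loudness envelope.
        envelope += smoothing * (abs(s) - envelope)
        if envelope > threshold:
            # Above threshold: shrink the excess by the compression ratio.
            gain = (threshold + (envelope - threshold) / ratio) / envelope
        else:
            gain = 1.0
        out[i] = s * gain
    return out
```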
But it does make a good thought experiment. What would [blank] be like if you added a crazy-good computer to it? Neat idea.
The video filters and interpolation and everything are great, and when you look at the emulation scene for older games, like NES- and SNES-generation stuff, you see this constantly: scale filters, edge smoothing, etc. You could make the argument that things like anti-aliasing and anisotropic filtering were designed for exactly this reason as well. This just moves it from the processing system to the display.
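As a concrete example of those scale filters, here is a sketch of EPX/Scale2x, one of the classics, written for clarity rather than speed. Each source pixel becomes a 2x2 block whose corners copy a neighbor when the surrounding pixels suggest a diagonal edge.

```python
# EPX/Scale2x: a classic emulator scale filter. Each pixel expands to a
# 2x2 block; a corner copies an adjacent pixel when two neighbors agree,
# which smooths diagonal edges without blurring flat areas.
import numpy as np

def scale2x(img: np.ndarray) -> np.ndarray:
    """Upscale an H x W array of palette indices (or gray values) 2x with EPX."""
    h, w = img.shape
    out = np.empty((h * 2, w * 2), dtype=img.dtype)
    for y in range(h):
        for x in range(w):
            p = img[y, x]
            a = img[y - 1, x] if y > 0 else p          # above
            b = img[y, x + 1] if x < w - 1 else p      # right
            c = img[y, x - 1] if x > 0 else p          # left
            d = img[y + 1, x] if y < h - 1 else p      # below
            out[2 * y,     2 * x]     = a if (c == a and c != d and a != b) else p
            out[2 * y,     2 * x + 1] = b if (a == b and a != c and b != d) else p
            out[2 * y + 1, 2 * x]     = c if (d == c and d != b and c != a) else p
            out[2 * y + 1, 2 * x + 1] = d if (b == d and b != a and d != c) else p
    return out
```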
But can anyone tell me how well converting an inherently 2D scene into 3D actually works? I haven’t seen it, but I have doubts about it being particularly effective when the material wasn’t captured that way to begin with. That said, I do want to see stuff like Coraline and Avatar get *proper* home releases. I’m just not sure the 3D conversion is going to be particularly useful for upgrading old content.
Let’s see how well Cameron’s conversion of Titanic to 3D works.
There was a trend some time ago, when amps went solid state, to add more and more features. At some point, all those features came off the amp and into the pre-processors (effects units, say), and the Twins came out of the closets and back onstage despite the high cost of 6L6s and rewiring the power supply. Controller keyboards replaced the very awkward synths, yet more processing power was hidden in the modules to enable a simpler interface (the controller) over more functionality (the synth banks). The point is that while lots of features sell in the showroom, they make the show harder to perform, and ultimately the users will simplify. Mercedes, for example, are known for having very simple dashboards.
So perhaps there is a contra-game here: how many devices do you use in daily routines that could benefit from having processors taken out of them?
len, is he doing that conversion by hand, or by algorithm? There’s a difference, I would think.
How transformative can you get? If you take a low-vertex model from ten years ago and render it in modern crisp stereoscopic 3D, what you get is a modern crisp view of a blocky model wrapped in a low-rez texture. The hardware might be sophisticated enough to smooth the rough edges, but that texture is still going to look pretty craptastic.
If it is feasible, it might bode well for ‘realistic’ graphics, which tend to age less well than stylized graphics as our standards of realism go up. But I question the practicality of magic hardware delivering a fix; 3D-from-2D may prove easier (from a hardware perspective) than new-3D-from-old-3D.
Hope I’m wrong. There’s a lot of cool older stuff that might captivate a new generation if it can get a cheap facelift.
Forget 3D: picture on-the-fly colorization.
Then, of course, the advertisers will get hold of it and put little “ZING” special effects on all of their product placements, with fly-out popups!
Add a computer to a clock. Now you’ve got a clock that can yell at you when you are late after it detects that you are still at home instead of on the road to work.
Add a computer to a doorknob. Now you cannot open the door even if you have the key.
Your shoes can now tell you how much weight you gained today.
Your clothes can detect the temperature and pressure, calculate the weather, and then adjust themselves to make you warmer or colder.
Computers are wonderful!