Wednesday, April 09, 2008

The next great PC advance.....

Computers are boring. People buy PCs only because they "have to": (a) an old machine malfunctions, or (b) a new game or operating system demands more power. When people buy a product only because they "have to", that product line is in trouble. In fact, PCs are already taking a back seat to mobile phones in countries like Japan.

What is missing is the "wow factor". We need a new PC that engages us in ways never seen before. Heck, we WANT a new PC that engages us in ways never seen before.

Case in point....talking PCs. Why can't we talk to our PCs and have them talk back to us?
The software for such things has been around for years. It's even included in XP. And I'm sure you've heard of Dragon NaturallySpeaking (by Nuance). It's a neat product, but it has its limitations.

For example, DNS seems to always type "cocaine" whenever I say "ok". It also becomes worse and worse at detecting my voice properly as the day wears on and my voice becomes strained due to dictating a large manuscript.

Now, I know that voice processing takes a lot of processing power. Just try out the voice processing that comes built in to XP and you'll see what I mean.

But why has progress stopped there? Long ago we realized the fun and functionality of high-resolution graphics. Unfortunately, processing high-resolution graphics was a burden on the PC's processor and ate up large amounts of RAM, slowing the machine and degrading performance.

The answer? A GPU (graphics processing unit) on its own card, with its own RAM. We even designed special slots on the motherboard to accommodate the massive flow of data to and from these graphics cards, and we developed special chips and programming devoted to graphics processing to speed the response time of these cards and of the applications they would spawn.
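The offloading idea behind the GPU can be sketched in software. Here's a minimal, hypothetical Python sketch (the function names and workload are made up) of handing a heavy step to a dedicated worker so the main thread stays free, which is roughly what a dedicated voice chip would do in hardware:

```python
# Hypothetical sketch: offload a heavy "signal processing" step to a
# dedicated worker, the way a GPU takes graphics work off the main CPU.
from concurrent.futures import ThreadPoolExecutor

def heavy_transform(samples):
    # Stand-in for an expensive step (e.g., filtering an audio frame).
    return [s * s for s in samples]

def process_audio(samples):
    with ThreadPoolExecutor(max_workers=1) as worker:
        # The heavy step runs off the main thread...
        future = worker.submit(heavy_transform, samples)
        # ...so the main thread stays free for the user interface here.
        return future.result()

print(process_audio([1, 2, 3]))
```

A real voice card would go further, of course: separate silicon and separate RAM, not just a separate thread.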

Now we can play games like Crysis (some say it's the most visually stunning game ever produced) and even make movies and 3D animations right on our PCs. None of this would have been possible without the graphics card, with its dedicated processor, RAM, programming languages and tools.

So, why don't we have a Voice Processing Unit? Why hasn't someone created a Voice Processing Card that has a chip dedicated to processing the human voice, with its own RAM, programming language and tools? And why don't I have it in my PC already?

The same question goes for the AI that would make the Voice Processing Card such a treat. Both voice processing and artificial intelligence are very CPU- and RAM-intensive (AI even more than voice processing). So, move them to their own cards and let's start making PCs that we can use as virtual assistants.
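To get a feel for the load, here's a rough back-of-the-envelope sketch. All figures are assumptions typical of speech audio (16 kHz mono), not measurements, and this counts only the FFT front end of a recognizer; the decoding search behind it costs far more:

```python
import math

# Assumed, typical figures for speech audio (not measurements):
HOP_MS = 10            # a new analysis frame every 10 ms
FFT_SIZE = 512         # points per FFT

frames_per_sec = 1000 // HOP_MS                    # 100 frames every second
ops_per_fft = 5 * FFT_SIZE * math.log2(FFT_SIZE)   # rough FFT operation count
front_end_ops = frames_per_sec * ops_per_fft

# Millions of operations per second before any recognition even starts;
# the search through the language model that follows dwarfs this.
print(f"{front_end_ops:,.0f} ops/sec for the FFT front end alone")
```

Even this crude estimate lands in the millions of operations per second, nonstop, which is exactly the kind of steady load you'd rather hand to dedicated silicon.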

So, that was the idea behind the letter I sent to AMD President and CEO Dirk Meyer's assistant, Jan Boswell, on 9/09/2007....

Jan,


A group of us (software engineers and business networking consultants) were
tossing around some ideas of what would make the next great leap in PCs.
One long-promised idea that has simply never been realized was the idea of
actually being able to speak to your PC (like Dragon Naturally Speaking software
products allow now – in a limited fashion). While touching a PC (like the
HP touch screens) is neat, talking to it would be fantastic.


Imagine a PC screen over your workspace in the kitchen that you could speak to
for instructions, images and video on how to make a new dish for supper (without
having to touch anything but the food) – or a screen mounted on a mechanic’s
toolbox that could talk him/her through a car’s diagnostic test or repair – or
even an in-car PC that a driver could talk to for directions or emails without
having to take her hands off the wheel or eyes off the road (like OnStar for
every vehicle – not just GM) – or a surgeon being able to call up info on a
screen (like patient x-rays, MRIs, etc.) via voice alone that may help
save a life. The advantages of touch-free (or voice) computing over
traditional touch-computing are far too many to list here.


We chatted about why this has never materialized and feel that a huge problem
has been the amount of system resources (mostly CPU) required for real-time
voice control of your PC. But, with the advent of multi-core chips (and
specifically the integration of graphics processing on AMD’s chips), we were
wondering if voice processing integration (by utilizing a core optimized for
voice processing) might be the answer.


A core built into the processor, optimized for voice recognition (much
the same way a GPU is optimized for graphics), would provide much better
voice recognition capabilities than are currently possible, would not slow other
processes on other cores and just might make talking to your PC (and it talking
back) a reality. The AI that would be needed for true conversation with
your PC may also need its own optimized core, but that could come after the core
ability to speak to your PC and have it carry out your commands in real
time.


Imagine a PC that you could talk to, and it could talk back. Then imagine
that the PC’s main voice is the same as that of the Star Trek series’
computer. Selling this feature would be like shooting fish in a barrel (we
geeks are suckers for anything Star Trek). The usefulness and novelty
would catapult AMD to the forefront of chip sales without even breaking a
sweat. Add to that the multitude of areas that touch-free computing can
help to advance – like the OnStar-like capabilities of a car PC, the voice
control and integration of the home and the life-saving abilities of an
operating room voice PC and we think you’d have a winner that Intel would have a
hard time matching.


And, making it easily programmable by hobbyist programmers (as well as
professional programmers) would ensure that the voice technology found its way
into applications of all types.


Patenting the integrated voice processing may even give you a lead that Intel
would find it hard to overcome.


From what we have seen, Dirk is definitely an innovative person who is highly
motivated to place AMD in a position to dominate the chip marketplace.
And, although not a particularly technical observation, we feel that he could
see the value in such a technology – both from marketing and technical
advancement points of view.


Thanks for getting this to the proper people! Have a great day!

Jim Hubbard


Well, a couple of days went by and I emailed Jan again to find out what Dirk thought of the idea. Jan did not respond to my email.

I emailed her again. Again, no response. In fact, I emailed her for a solid week, every day, only to be ignored.

Now, I don't think that I am so important that she *must* return my emails, but it would be nice to know that our suggestion was given some actual thought. (Especially since it is nearly impossible to offer suggestions on the websites of AMD and Intel, and it took hours just to locate Jan to send her these suggestions.)

Well, not being one to give up easily, I sent a similar letter to Intel's Sue Colla (after 2-3 hours on the phone finding someone to send it to, having failed to find a simple "wish list" or "suggestion box" link on their website).

Sue,


Thanks for taking the time to talk with me about my ideas for a new multi-core
chip. The following is basically the same letter that was sent to AMD
about this subject. I really don’t care who does it first, I just would
like to see the idea enacted if it is practical to do so.


“A group of us (software engineers and business networking consultants) were
tossing around some ideas of what would make the next great leap in PCs.
One long-promised idea that has simply never been realized was the idea of
actually being able to speak to your PC (like Dragon Naturally Speaking software
products allow now – in a limited fashion). While touching a PC (like the
HP touch screens) is neat, talking to it would be fantastic.


Imagine a PC screen over your workspace in the kitchen that you could speak to
for instructions, images and video on how to make a new dish for supper (without
having to touch anything but the food) – or a screen mounted on a mechanic’s
toolbox that could talk him/her through a car’s diagnostic test or repair – or
even an in-car PC that a driver could talk to for directions or emails without
having to take her hands off the wheel or eyes off the road (like OnStar for
every vehicle – not just GM) – or a surgeon being able to call up info on a
screen (like patient x-rays, MRIs, etc.) via voice alone that may help
save a life. The advantages of touch-free (or voice) computing over
traditional touch-computing are far too many to list here.


We chatted about why this has never materialized and feel that a huge problem
has been the amount of system resources (mostly CPU) required for real-time
voice control of your PC. But, with the advent of multi-core chips (and
specifically the integration of graphics processing on AMD’s chips), we were
wondering if voice processing integration (by utilizing a core optimized for
voice processing) might be the answer.


A core built into the processor, optimized for voice recognition (much
the same way a GPU is optimized for graphics), would provide much better
voice recognition capabilities than are currently possible, would not slow other
processes on other cores and just might make talking to your PC (and it talking
back) a reality. The AI that would be needed for true conversation with
your PC may also need its own optimized core, but that could come after the core
ability to speak to your PC and have it carry out your commands in real
time.


Imagine a PC that you could talk to, and it could talk back. Then imagine
that the PC’s main voice is the same as that of the Star Trek series’
computer. Selling this feature would be like shooting fish in a barrel (we
geeks are suckers for anything Star Trek). The usefulness and novelty
would catapult the developing chip company to the forefront of chip sales
without even breaking a sweat. Add to that the multitude of areas that
touch-free computing can help to advance – like the OnStar-like capabilities of
a car PC, the voice control and integration of the home and the
life-saving abilities of an operating room voice PC and we think you’d have a
winner that anyone would have a hard time matching.


And, making it easily programmable by hobbyist programmers (as well as
professional programmers) would ensure that the voice technology found its way
into applications of all types. Hobbyist programmers are the key to any
new PC technology, as it is they who put out the most programs, incorporate
the newest technologies first and recommend technologies to those who look
at them as ‘geeks.’”


That’s about it. The concept of talking to your PC is certainly not new but,
with the advances made possible by multi-core chips, it may now be within our
grasp. A dedicated Voice Processor may make voice command of your PC and
the many life changing (and life saving) benefits that it would bestow upon
society a reality.


Thanks so much for your time. If there is anything that I can do or
explain further (like more examples that I have thought of for its use) please
do not hesitate to call.

Jim Hubbard


True to form, I got no emailed response from Intel either. But I did receive a letter from Intel's attorneys stating that they did not wish to license my idea at this time.

WHAT?! Where did I mention licensing or patents or any such thing? I didn't. So, I called the attorney at Intel to tell him that I wasn't selling a license, I was simply offering a suggestion and expressing a wish for a better PC that would offer a more personalized experience.

He then told me that Intel may be working on that already. I asked if he could find out and let me know and he said that, due to the size of the company, that would be impossible.

So...just how do we get the next great leap in PC technology to happen?

When can I speak to my PC? When will it speak back? When will its fledgling AI recognize my voice (or face, via the webcam) and greet me with "Good morning, Jim" and automatically tell me the day's weather, stock reports and other info I ask for daily, because it has learned what I ask for each day and now gets that info in real time over my broadband connection?
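That "learns what I ask for each day" behavior boils down to simple frequency tracking. Here's a toy, hypothetical Python sketch (the class name, method names and habit threshold are all made up for illustration):

```python
# Toy sketch of an assistant that notices habitual requests and
# pre-fetches them each morning. Purely illustrative.
from collections import Counter

class MorningAssistant:
    def __init__(self, threshold=3):
        self.history = Counter()     # how often each request has been made
        self.threshold = threshold   # asks before a request counts as a habit

    def record(self, request):
        self.history[request] += 1

    def morning_briefing(self):
        # Anything asked for often enough gets fetched automatically.
        return sorted(r for r, n in self.history.items() if n >= self.threshold)

a = MorningAssistant()
for _ in range(3):          # three mornings of the same two requests
    a.record("weather")
    a.record("stocks")
a.record("news")            # asked only once, so not yet a habit
print(a.morning_briefing())
```

The hard part in 2008 was never this bookkeeping; it was the recognition and synthesis sitting in front of it, which is exactly what the dedicated core would handle.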

Why hasn't this happened already? Surely I am not alone in seeing that this technology can be realized now. Surely the great minds at Intel and AMD have thought of this before.

So, when will my PC be more than just a box and become an integral part of my day, like a personalized virtual assistant?

And, why doesn't AMD or Intel see the benefit in making this happen?

I'm just sayin'......
