There is very interesting work being done at the MIT Media Lab on context-aware devices that sense how you are holding them and adjust their function according to what they think you want to do.
Watch the video; it's quite promising:
I would be interested in how the researchers arrived at what were considered the standard ways of holding objects according to their use. When I had a Nikon FM years ago, one of my favorite ways of holding it was to cradle the lens in the upturned palm of my left hand while my right hand rested on the shutter release button and film advance lever.
I would imagine there are any number of possible contact points for an object, and I would guess the next step would be not a predefined set of contact points but a customized set learned by the device from a history of past use.
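To make that idea concrete, here is a minimal sketch of what "learning from past use" might look like. It assumes a hypothetical device whose surface touch sensors report a binary contact-point vector; the device averages past grips per mode into a template, then classifies a new grip by nearest template. The sensor layout, mode names, and data are all invented for illustration, not taken from the MIT work:

```python
# Hypothetical sketch: learn per-mode grip templates from a history
# of use, then classify a new grip by nearest centroid.
# A grip is a binary vector, one entry per touch sensor on the shell.

def learn_templates(history):
    """history: list of (grip_vector, mode) pairs from past use.
    Returns {mode: average grip vector} learned from that history."""
    sums, counts = {}, {}
    for grip, mode in history:
        counts[mode] = counts.get(mode, 0) + 1
        acc = sums.setdefault(mode, [0.0] * len(grip))
        for i, v in enumerate(grip):
            acc[i] += v
    return {m: [x / counts[m] for x in acc] for m, acc in sums.items()}

def classify(grip, templates):
    """Return the mode whose learned template is closest (squared
    Euclidean distance) to the observed grip."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda m: dist(grip, templates[m]))

# Invented example history: 4 sensors, two modes.
history = [
    ([1, 1, 0, 0], "camera"),
    ([1, 0, 0, 0], "camera"),
    ([0, 0, 1, 1], "phone"),
    ([0, 1, 1, 1], "phone"),
]
templates = learn_templates(history)
print(classify([1, 1, 0, 0], templates))  # camera-like grip -> "camera"
```

A personalized version would simply keep appending each user's confirmed (grip, mode) pairs to the history, so the templates drift toward that user's habits, like the cradled-lens grip above.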
Also, having the device give an auditory response to the way it is held ("Camera"… "Phone," etc.) would be great as well; otherwise we have to shift our grip just to see what the LCD is telling us about the mode the device is in.
Kudos to the MIT Media Lab. This is very interesting stuff indeed and worth every bit of research we can give over to the subject.