In a previous post I mentioned that augmented reality glasses will need an object detection and classification feature. I’d like to explore that a bit more. Currently, object-oriented programming deals largely with highly abstract “objects” that, despite the examples in programming textbooks, rarely correspond to real-world objects.
Augmented reality will change that. Augmented reality is the merging of the real and the digital. When I see a real-world object, the glasses should show me a set of digital functions that I can perform on that object. Previously I gave the example of a book and described what kinds of information the glasses could present. To do that, the glasses must first be able to identify real-world objects.
Once that identification has taken place, the glasses can present the user with a set of functions appropriate for that object. That set of functions could be what a program looks like in the new paradigm: not applications, but sets of functions that operate on real-world objects. In other words, real object-oriented programming. For instance, instead of an astronomy application like we have now, there would be a set of functions that activate when you look at the sky.
The glasses would determine that you are looking at the sky, check an internal registry to see which functions are available for that type of object, and somehow present the user with the option of enabling those functions.
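That registry lookup can be sketched in a few lines. This is a minimal illustration, not a real implementation: the object types (“sky”, “book”), the registered function names, and the `register` decorator are all hypothetical placeholders, and a real system would get the object type from a trained detection model rather than a hard-coded string.

```python
from typing import Callable, Dict, List

# Hypothetical registry mapping a recognized object type
# to the named functions that can operate on it.
registry: Dict[str, Dict[str, Callable[[], str]]] = {}

def register(object_type: str, name: str):
    """Decorator that files a function under an object type."""
    def wrapper(fn: Callable[[], str]) -> Callable[[], str]:
        registry.setdefault(object_type, {})[name] = fn
        return fn
    return wrapper

@register("sky", "identify planets")
def identify_planets() -> str:
    return "highlighting visible planets"

@register("sky", "show constellations")
def show_constellations() -> str:
    return "overlaying constellation lines"

@register("book", "show reviews")
def show_reviews() -> str:
    return "fetching reviews for this book"

def options_for(object_type: str) -> List[str]:
    # What the glasses could offer once an object is identified.
    return sorted(registry.get(object_type, {}))

print(options_for("sky"))
```

Looking at the sky would surface `['identify planets', 'show constellations']`; looking at a book would surface its own set, and an unrecognized object would surface nothing. The functions belong to the object type, not to an application, which is the point of the paradigm.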
Not all augmented reality functions would work like this, of course; some may be needed independently of any real-world object. Still, I think this new kind of OOP will play a dominant role in future augmented reality systems.
Categories: Augmented Reality