Augmented Reality in Product Development: A Neuroscience Perspective
Product development is a collaborative process in which a product evolves from an idea, to drawings, and ultimately to a physical prototype. The process is iterative: two-dimensional (2D) static images and schematics drive development early on, with a physical three-dimensional (3D) prototype emerging only later. This approach places a heavy load on the brain's cognitive systems because 3D dynamic representations and imagery must be constructed from a series of 2D static images.
From a cognitive neuroscience perspective, constructing a 3D dynamic representation from a series of 2D static images requires substantial cognitive effort. First, you have to hold mental representations of a series of 2D static images in short-term (working) memory. Second, you have to combine these 2D static representations on the fly to construct an accurate 3D static representation. Finally, you have to infer the dynamic nature of the product and impart it onto this 3D static representation. Every step in this process is effortful and represents a chance to fail.
Cognitive neuroscience research is clear that each of these steps requires an enormous amount of cognitive capacity (in the form of working memory) as well as cognitive energy (in the form of executive attention). Whenever working memory and executive attention are taxed, one is more likely to make an error and generate an inferior mental representation. Because this cognitive translation process is error-prone, it is impossible to know whether each member of the team has the same product “visualized” in their head. Only once a physical prototype is built and discussed can one be sure that all team members are seeing the same product. By starting the development process with 2D images that place a heavy load on the cognitive and imagery systems, and only later building a physical 3D prototype, the product development process becomes suboptimal and inefficient.
A Neuroscience Perspective
To optimize and increase the efficiency of product development, other processing systems in the brain should be engaged from the start. First and foremost, one should engage the visualization systems in the occipital lobe of the brain with 3D dynamic product representations from the start of the development process. These virtual 3D dynamic structures should be mobile so that the product development team can view them from any direction, and they should support expansion and contraction so that the team can inspect them from inside or out. The whole team should be presented with the same 3D dynamic representation so that pros and cons can be discussed with respect to a single, fixed representation.
Second, one should be able to engage with the product representation behaviorally. If the product is meant to serve some purpose for individuals through physical interaction, then those behavioral interactions should be tested during the product development process. This allows the team to address questions about the human factors of the product and the naturalness of the interaction. By allowing the development team to interact behaviorally with the product, behavioral systems in the brain, such as the striatum, will be engaged.
Augmented reality (AR) tools offer significant promise for efficient product development because they engage cognitive, behavioral, and visualization brain systems in synchrony, instead of relying exclusively on cognitive processing systems. Imagine the product development team discussing the merits of the product from the start while viewing a 3D dynamic “virtual” product with a hand-held AR tablet or hands-free AR glasses. Team members all view the same product with the ability to “hold” it in their hands and manipulate it, all while discussing specific design features that they like or dislike. This approach reduces the need for cognitive translation and imagery processing in the brain, and instead engages the visual object representation regions in synchrony with behavioral and cognitive processing regions. In a nutshell, it works the way our brains work, creating an optimal environment for development with reduced likelihood of error.
AR tools come in many forms and budget ranges. If the most important aspect of product development is having a shared visual representation, then hand-held devices should suffice. These are more budget-friendly because an existing phone or tablet can be used; only AR software needs to be installed.
If behavioral interaction is essential, then hands-free devices are the way to go. A number of these are on the market, offering a range of features and prices.