The emulation theory of representation postulates that there are multiple internal modeling circuits in the brain, referred to as emulators. These emulators mimic the input-output patterns of many cognitive operations, including action, perception, and imagery.[1][2] Often running in parallel with the body, these emulators provide feedback in the form of mock sensory signals of a motor command's consequences, with less delay than the body's sensors.[3] These forward models receive efference copies of the motor commands being sent to the body and produce the sensory signals those commands are expected to generate. Emulators are continually updated so as to give the most accurate anticipatory signal following motor inputs.[1]
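The core idea can be illustrated with a minimal sketch (the linear dynamics and all names here are illustrative assumptions, not a model taken from the cited sources): the emulator takes an efference copy of the motor command, advances its own estimate of the body's state, and returns the mock sensory signal that state implies, without waiting for the body.

```python
import numpy as np

class ForwardModelEmulator:
    """Toy forward model: given an efference copy of a motor command,
    advance an internal estimate of the body's state and return the
    mock sensory signal that state would produce."""

    def __init__(self, A, B, C):
        # Assumed linear body dynamics: state' = A*state + B*command,
        # sensory signal = C*state. Real emulators need not be linear.
        self.A, self.B, self.C = A, B, C
        self.state = np.zeros(A.shape[0])

    def predict(self, efference_copy):
        self.state = self.A @ self.state + self.B @ efference_copy
        return self.C @ self.state  # predicted (mock) sensory feedback
```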
Little is known about the overall structure of emulators. An emulator could operate like a lookup table with a very large associative memory of input-output sequences. Under this system, the emulator receives a motor command input, finds the closest matching input in its database, and then sends the output associated with that sequence. The other model is an articulated emulator. This model requires that, for each significant sensor of the human musculoskeletal system, there is a corresponding group of neurons within the emulator whose firing frequency parallels it.[1][4] These groups of neurons would receive the same input as that being sent to their corresponding part of the musculoskeletal system. For example, when raising one's hand, signals would be sent to the neurons responsible for wrist, elbow, and shoulder angles and for the arm's angular inertia.
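A hypothetical sketch of the lookup-table organisation is given below; the storage scheme and the nearest-match query are assumptions chosen only to illustrate the idea of a large associative memory of input-output pairs. An articulated emulator would instead maintain one internal variable per musculoskeletal quantity (joint angles, inertias, and so on), as in the forward-model sketch above.

```python
import numpy as np

class LookupTableEmulator:
    """Toy associative-memory emulator: stores (motor command -> sensory
    outcome) pairs and answers queries with the outcome of the closest
    previously seen command."""

    def __init__(self):
        self.commands = []
        self.outcomes = []

    def store(self, command, outcome):
        self.commands.append(np.asarray(command, dtype=float))
        self.outcomes.append(np.asarray(outcome, dtype=float))

    def predict(self, command):
        command = np.asarray(command, dtype=float)
        distances = [np.linalg.norm(command - c) for c in self.commands]
        return self.outcomes[int(np.argmin(distances))]  # nearest stored match
```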
Regardless of the structure, both systems will grow and change over time. This is due to constant, fluctuating noise from the environment and the fact that the body changes over time: growing limbs and muscles change both the required input commands and the resulting output. This requires a degree of plasticity in the emulators. Emulators are thus continually updating, always receiving the actual output that the musculoskeletal system produces from an inputted command and comparing it to their own predicted output. It is likely that this is accomplished through a Kalman filter. The difference between the two outputs, however, is not applied to the emulators as a complete correction.
Noise is a constant, fluctuating variable affecting musculoskeletal sensory output, so the measured output differences are almost always larger than the emulator's true error. Consequently, an internal calculation is made: the correction is weighted by the relative reliability of the emulator's output and by whether the current conditions tend to produce variable or stable sensory outputs, and only then applied.[1]
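A one-dimensional Kalman-style update, shown below as an illustrative simplification rather than the exact computation proposed in [1], captures how the correction is weighted: the noisier the sensory channel is assumed to be, the smaller the fraction of the prediction error that is applied to the emulator.

```python
def weighted_correction(predicted, measured, p_emulator, r_sensor):
    """predicted: the emulator's output; measured: the actual sensory output;
    p_emulator: variance (unreliability) of the emulator's estimate;
    r_sensor: variance of the noisy sensory channel."""
    gain = p_emulator / (p_emulator + r_sensor)          # Kalman gain in 1-D
    corrected = predicted + gain * (measured - predicted)
    p_updated = (1.0 - gain) * p_emulator
    # Large sensor noise shrinks the gain, so the output difference is
    # never applied as a complete correction.
    return corrected, p_updated
```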
The locations of these emulators have been found to depend largely on the motor system for which they provide processing and information.
Emulators are helpful tools to the body. Sensory feedback naturally takes longer than a predetermined forward model that the brain can compute or access directly. This timing gap can be delayed further by other environmental or physical factors that often occur throughout life. Emulators allow a mock output signal that is often close to that of the real thing. It is further postulated that the emulator can be run independently thus allowing the brain see the likely outcome of a command without running it on the entire system. Furthermore, when returning sensory information has noise and\or holes, the emulator can be run in parallel, making corrections and filling in gaps.[5]
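One way to picture the gap-filling role is the sketch below; the blending rule and the use of None for a missing sample are assumptions made only for illustration. The emulator's prediction stands in entirely when a sensory sample is absent and otherwise pulls the noisy sample toward the prediction.

```python
def fuse_feedback(sensor_sample, emulator_prediction, trust_in_sensor=0.5):
    """Return a usable feedback value even when the real sample is missing."""
    if sensor_sample is None:
        return emulator_prediction        # fill the hole with the mock signal
    # Otherwise blend the two sources, weighting the noisy sample by how
    # much it is trusted relative to the emulator.
    return emulator_prediction + trust_in_sensor * (sensor_sample - emulator_prediction)
```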
Experiments have found that the motor control loop takes approximately 250-400 ms to complete.[6] Motor centers, however, are already making corrections to their previous motor plan about 70 ms after the onset of the movement, based on peripheral information that should not have arrived yet.[7] It is thus likely that there are motor emulators that receive and process efference copies and send predictive peripheral output before the actual output from the sensors is received. Motor centers often have to operate very quickly, and feedback delays and peripheral errors make real-time, rapid motor movement very difficult and liable to failure. Through the use of emulators, motor centers can receive fast and reliable feedback in the form of a priori predictions. The real peripheral output, arriving later, can then be incorporated into a more accurate model.[5]
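The timing argument can be made concrete with the sketch below; the delay constants are rough values taken from the figures above, and the function is an illustration rather than a claim about neural implementation.

```python
SENSORY_DELAY_MS = 300   # assumed round trip through the periphery (250-400 ms)
EMULATOR_DELAY_MS = 70   # assumed latency of the internal prediction

def available_feedback(t_since_onset_ms, emulator_output, sensor_output):
    """Return whichever feedback signal exists at time t after movement onset."""
    if t_since_onset_ms >= SENSORY_DELAY_MS:
        return sensor_output      # real peripheral feedback has finally arrived
    if t_since_onset_ms >= EMULATOR_DELAY_MS:
        return emulator_output    # the a priori prediction stands in for it
    return None                   # too early for any feedback at all
```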
It is possible that emulators are largely responsible for motor imagery. By running motor areas offline, emulators can receive hypothetical movement commands and return the likely sensory results as imagery. Much of this may happen without conscious awareness. For instance, when walking towards an object on the ground to grab it, the brain runs offline motor plans to determine the optimal hand positioning and grip strength for picking up the object.[5]
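A hypothetical sketch of such offline use follows; the emulator, the scoring function, and the candidate commands are all assumed inputs, and the point is only that every command is evaluated in the internal model before one is sent to the body.

```python
def plan_grasp(candidate_commands, emulator, score_grasp):
    """Run each hypothetical command through the emulator only and return
    the command whose imagined sensory outcome scores best."""
    best_command, best_score = None, float("-inf")
    for command in candidate_commands:
        imagined_outcome = emulator.predict(command)   # never sent to the arm
        score = score_grasp(imagined_outcome)          # e.g. predicted grip stability
        if score > best_score:
            best_command, best_score = command, score
    return best_command   # only this command is finally executed
```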
Modality-specific visual emulators were shown experimentally by Ernst Mach in 1896. In his experiment, the subjects' eyes were prevented from moving and the subjects were then exposed to a stimulus that would normally trigger a saccade. Most subjects reported experiencing their entire visual scene briefly shifting in the direction opposite the stimulus. Mach thus inferred that there is a mechanism within the visual system producing an a priori estimate of future visual scenes based upon motor commands and current visual stimuli.[8] In a similar experiment on monkeys, this particular emulator was found to reside in a group of neurons within the parietal cortex. Furthermore, it was discovered that these neurons began firing before the eyes even moved. They only fired, however, if the motor command would result in a visual saccade. This suggests that visual emulators are modality-specific.[9]
Emulators can be useful in a variety of industries and systems. Many factories and plants have central control systems that cannot afford to wait for certain information or updates. For example, control systems in chemical plants and reactors cannot wait for correctional signals from target systems before making alterations to chemical and material distribution and flow, so accurate a priori predictions are made instead.[10]
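The same logic, reduced to a toy proportional controller, might look like the sketch below; the plant model, gain, and function names are illustrative assumptions, not a description of any particular plant's control system.

```python
def adjust_flow(setpoint, current_flow, command, plant_model, gain=0.1):
    """Correct a flow command using a model's a priori prediction instead of
    waiting for the delayed measurement from the target system."""
    predicted_flow = plant_model(current_flow, command)  # forecast, no waiting
    error = setpoint - predicted_flow
    return command + gain * error    # act on the prediction immediately
```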
Emulator-like programming has already been applied to many AI systems. In 1988, Bartlett Mel created a robot he named Murphy. Murphy was a robot arm with three joints whose main objective was to reach out and grasp an object while avoiding contact with several obstacles in the way. The robot could operate under two modes. Under the first mode, Murphy would simply move its arm around the workspace until it found a path of unobstructed movement to the intended object. After enough of these runs, Murphy learned to associate a motor command with a future visual output. This enabled its second mode of operation. Similar to an emulator, Murphy could solve the navigation problem offline: receiving visual imagery of the workspace, it could manipulate an internal visual grid with efference copies. Consequently, the robot could navigate through the obstacles and grasp the intended object in only one run.[1]
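A sketch of the offline mode, under the assumption that it can be treated as a search over imagined arm configurations in the internal grid, is given below; the helper functions (neighbors, collides, at_target) and the breadth-first search are illustrative, not a description of Mel's actual implementation.

```python
from collections import deque

def plan_offline(start_joints, target, internal_grid, neighbors, collides, at_target):
    """Search for an unobstructed path using only the internal visual grid,
    so the real arm has to move just once, along the returned path."""
    frontier = deque([(start_joints, [start_joints])])
    visited = {start_joints}
    while frontier:
        joints, path = frontier.popleft()          # breadth-first search
        if at_target(joints, target):
            return path                            # imagined postures to execute
        for nxt in neighbors(joints):              # candidate motor commands
            if nxt not in visited and not collides(nxt, internal_grid):
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None                                    # no unobstructed path found
```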
The Mars rovers have to utilize a similar manipulation of input: NASA has to overcome a time delay of approximately twenty minutes in information exchange between Earth and a rover on Mars.[11]