Dual control theory is a branch of control theory that deals with the control of systems whose characteristics are initially unknown.[1][2] It is called dual because in controlling such a system the controller's objectives are twofold:
- Action: to control the system as well as possible based on current knowledge of it.
- Investigation: to experiment with the system so as to learn about its behaviour and thus control it better in the future.
These two objectives may be partly in conflict. In the context of reinforcement learning this tension is known as the exploration-exploitation trade-off, the classic example being the multi-armed bandit problem.
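To make the trade-off concrete, the following is a minimal sketch of an epsilon-greedy strategy for a multi-armed bandit; the arm payouts, epsilon value, and horizon are illustrative assumptions, not taken from the article.

```python
import random

def epsilon_greedy(true_means, epsilon=0.1, horizon=1000, seed=0):
    """Play a Gaussian-reward bandit, balancing probing and exploiting."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms          # pulls per arm
    estimates = [0.0] * n_arms     # running mean reward per arm
    total = 0.0
    for _ in range(horizon):
        if rng.random() < epsilon:
            # Investigation: probe a random arm to improve the estimates.
            arm = rng.randrange(n_arms)
        else:
            # Action: exploit the arm that currently looks best.
            arm = max(range(n_arms), key=lambda a: estimates[a])
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total, estimates

if __name__ == "__main__":
    gained, est = epsilon_greedy([0.2, 0.5, 0.9])
    print(f"total reward: {gained:.1f}, estimated means: {est}")
```

Here the random pulls sacrifice immediate reward to refine the estimates that the greedy choices rely on, which is exactly the conflict between the two objectives described above.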
Dual control theory was developed by Alexander Aronovich Fel'dbaum in 1960. He showed that in principle the optimal solution can be found by dynamic programming, but the computation is intractable for all but the simplest problems; as a result a number of methods for designing sub-optimal dual controllers have been devised.
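To see why the exact solution scales badly, here is a schematic formulation; the notation (f, g, theta, xi) is assumed for illustration and is not taken from the article.

```latex
% Schematic dual control problem (notation is illustrative).
% The unknown parameter vector \theta enters the dynamics, so the
% controller must learn \theta while regulating the state x_k with
% inputs u_k, minimizing the expected cost J.
\begin{align*}
  x_{k+1} &= f(x_k, u_k, \theta, w_k), \qquad \theta~\text{unknown}, \\
  J       &= \mathbb{E}\left[ \sum_{k=0}^{N-1} g(x_k, u_k) \right].
\end{align*}
% Dynamic programming must recurse over the hyperstate \xi_k, the joint
% conditional distribution of (x_k, \theta) given all past inputs and
% observations:
\[
  V_k(\xi_k) = \min_{u_k} \, \mathbb{E}\!\left[ g(x_k, u_k)
      + V_{k+1}(\xi_{k+1}) \,\middle|\, \xi_k, u_k \right].
\]
% Since \xi_k is a probability distribution rather than a
% finite-dimensional vector, the recursion is intractable in general.
```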
To use an analogy: when driving a new car you want to reach your destination cheaply and smoothly, but you also want to learn how well the car accelerates, brakes and steers, so you perform some test manoeuvres to get a better feel for it. Similarly, a dual controller injects a so-called probing (or exploration) signal into the system; this may detract from short-term performance but improves control in the future.
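The following is a minimal sketch of one common sub-optimal scheme of this kind: certainty-equivalence control with an added probing signal, for a scalar system with one unknown parameter. The system model, gains, and the probing rule are illustrative assumptions, not a method prescribed by the article.

```python
import random

# Sub-optimal dual control sketch for the scalar system
#   x[k+1] = a*x[k] + b*u[k] + w[k],  with a known and b unknown.
def simulate(a=0.9, b_true=2.0, steps=50, seed=1):
    rng = random.Random(seed)
    x = 1.0
    b_hat, P = 0.5, 10.0   # initial estimate of b and its uncertainty
    for _ in range(steps):
        # Action: certainty-equivalence control, treating b_hat as true.
        u_ce = -a * x / b_hat
        # Investigation: probing signal scaled by remaining uncertainty,
        # so exploration fades as the estimate of b improves.
        u_probe = 0.5 * (P ** 0.5) * rng.choice((-1.0, 1.0))
        u = u_ce + u_probe
        # True (unknown-to-the-controller) system response.
        x_next = a * x + b_true * u + rng.gauss(0.0, 0.1)
        # Recursive least squares update of b_hat from the observed step.
        denom = 1.0 + u * u * P
        K = P * u / denom
        b_hat += K * (x_next - a * x - b_hat * u)
        P /= denom
        x = x_next
    return b_hat, P, x

if __name__ == "__main__":
    b_hat, P, x = simulate()
    print(f"b estimate: {b_hat:.3f}, uncertainty: {P:.4f}, state: {x:.3f}")
```

Tying the probing amplitude to the parameter uncertainty P mirrors the test-manoeuvre analogy: the controller probes hard while it knows little and settles into pure regulation once the estimate is good.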
Original source: https://en.wikipedia.org/wiki/Dual_control_theory.