Structural learning

From Scholarpedia


Structural learning in motor control refers to a metalearning process whereby an agent extracts (abstract) invariants from its sensorimotor stream when experiencing a range of environments that share similar structure. Such invariants can then be exploited for faster generalization and learning-to-learn when experiencing novel but related task environments.


Theoretical Background

Adaptive Control

Adaptive control theory is a branch of control theory in which some of the properties of the system that we wish to control are unknown and therefore need to be identified (Åström & Wittenmark, 1995; Sastry & Bodson, 1989). The first step of system identification is to determine the relevant inputs and outputs of the system, the dimensionality and form of the control problem (e.g. linear or nonlinear), the relations between the control variables, their ranges of potential values, and the noise properties associated with them. This constitutes the problem of structural adaptive control. If the type of model is identified and only the parameters of this fixed model need to be determined, one deals with parametric adaptive control. The theoretical literature has primarily investigated parametric adaptive control, as this allows the adaptive control problem to be turned into a parametric optimization problem for which established system identification techniques can be employed. For the problem of structural adaptive control no standard methods are available.
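The parametric case can be illustrated with a minimal sketch: identifying the single unknown parameter of a fixed scalar linear plant $y = \theta u + \text{noise}$ by recursive least squares, a standard system identification technique. The plant, parameter value, and function names here are illustrative assumptions, not part of the original account; structural adaptive control would additionally have to choose the model form itself, for which no such standard recursion exists.

```python
import numpy as np

def recursive_least_squares(us, ys, p0=1000.0):
    """Identify the scalar parameter theta of the plant y = theta * u + noise.

    This is the parametric case: the model form (linear, one parameter)
    is fixed in advance and only theta is estimated online.
    """
    theta, p = 0.0, p0  # initial estimate and its variance
    for u, y in zip(us, ys):
        k = p * u / (1.0 + p * u * u)   # Kalman-style gain
        theta += k * (y - theta * u)    # correct by the prediction error
        p *= 1.0 - k * u                # shrink uncertainty
    return theta

rng = np.random.default_rng(0)
true_theta = 2.5
us = rng.uniform(-1, 1, 500)
ys = true_theta * us + 0.05 * rng.normal(size=500)
print(recursive_least_squares(us, ys))  # close to 2.5
```

Each observation refines the estimate online, which is what makes the recursion suitable for adaptive control, where the controller must act while identification is still in progress.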


Bayesian Networks

A Bayesian Network is a graphical model that efficiently represents the joint distribution of a set of random variables (Pearl, 1988). In the case of sensorimotor learning these random variables could be $N$ variables for the sensory input $I_1, I_2, \ldots, I_N$ (e.g. retinal input, proprioceptive input, or later stages of neural processing) and $M$ variables for the motor output $O_1, O_2, \ldots, O_M$ (e.g. muscle activations or earlier stages of neural processing). The dependencies between these variables are expressed by arrows in the network indicating the relation between any variable $X_i$ (such as $I_j$ or $O_k$) and its direct causal antecedents, denoted $parents(X_i)$. Thus, for a particular network structure $S$ with model parameters $\theta_S$ the joint probability distribution \[ P(\mathbf{X}|S,\theta_S)=P(X_1, X_2, \ldots, X_{N+M}|S,\theta_S) \] can be split up into a product of conditional probabilities \[ P(\mathbf{X}|S,\theta_S) = \prod_{i=1}^{N+M} P(X_i|parents(X_i),S,\theta_S) . \] The structure $S$ of the network determines the dependencies between the variables (that is, the presence or absence of arrows, which determines the factorization), while the parameters $\theta_S$ of that structure specify the dependencies quantitatively. Accordingly, structural learning refers to learning the topology of such a network, whereas parametric learning means determining quantitatively the strength of the connections given by the structure.
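The factorization can be written out for a toy network; the variable names and probability values below are invented for illustration only. The structure $S$ is the set of parent assignments, and the parameters $\theta_S$ are the numbers in the conditional probability tables.

```python
import itertools

# A tiny Bayesian network over binary variables: each entry maps a variable
# to (parents, conditional probability table). The CPT maps a tuple of
# parent values to P(variable = 1 | parents).
network = {
    "I1": ((), {(): 0.3}),                     # sensory input, no parents
    "I2": (("I1",), {(0,): 0.2, (1,): 0.8}),   # depends on I1
    "O1": (("I1", "I2"), {(0, 0): 0.1, (0, 1): 0.5,
                          (1, 0): 0.4, (1, 1): 0.9}),  # motor output
}

def joint(network, assignment):
    """P(X | S, theta_S) = product over i of P(X_i | parents(X_i))."""
    p = 1.0
    for var, (parents, cpt) in network.items():
        p1 = cpt[tuple(assignment[parent] for parent in parents)]
        p *= p1 if assignment[var] == 1 else 1.0 - p1
    return p

# The factorized joint sums to 1 over all assignments, as it must.
total = sum(joint(network, dict(zip(network, vals)))
            for vals in itertools.product([0, 1], repeat=len(network)))
print(total)  # 1.0 up to floating point
```

Structural learning would amount to choosing the parent sets themselves (the arrows), whereas parametric learning would adjust only the table entries for a fixed set of arrows.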

Experimental Evidence

Cognitive Science

Some of the earliest accounts of facilitated transfer between tasks with similar structure were given by Harry Harlow in his experiments on learning sets (Harlow, 1949), where monkeys had to learn different rules that linked stimuli and rewards. Harlow noted that most animal experiments had studied learning as an isolated episode on a single task and, therefore, did not reflect learning in everyday life which involves experience over many tasks. According to Harlow’s hypothesis, animals learn slowly by trial and error when they are faced with a novel task but once they have had experience with a set of related tasks they can generalize to solve novel tasks within this class, leading to apparently insightful behaviour. Harlow defined this as the formation of a learning set.

More recent studies in cognitive science have focused on how structural learning in hierarchical Bayesian models can be used to explain how humans generalize based on limited information, for example, when judging similarity between animals or numbers (Tenenbaum & Griffiths, 2001; Kemp, Perfors & Tenenbaum, 2004; Kemp & Tenenbaum, 2008; Gershman & Niv, 2010). In particular, Bayesian network models have been used to study human inference of causal dependencies between different observed variables (Tenenbaum & Griffiths, 2001a; Steyvers et al., 2003; Gopnik et al., 2004; Griffiths & Tenenbaum, 2005; Kemp & Tenenbaum, 2009). Structural learning and uncertainty over different structures have also been reported to explain human decision making in more complex sequential decision tasks that have previously been interpreted as exposing suboptimal behaviour on the part of the human decision makers (Acuna & Schrater, 2010).

Motor Control

Structure learning processes have also been reported in motor control (Braun, Mehring & Wolpert, 2010). In these experiments human participants experienced different visuomotor transformations between their actual and perceived hand movement in virtual reality environments (Braun et al., 2009a,b). For example, a linear transformation of the actual hand position $x_H$ would result in the perceived hand position $x_P$ \[ x_P = \left( \begin{array}{cc} a & b \\ c & d \end{array} \right) x_H . \] In this case changing the visuomotor transformation means changing the parameters $a,b,c,d$, which span a four-dimensional parameter space. Classes of different transformations with common structure can then be defined as subspaces of this four-dimensional space. For example, the class of visuomotor rotations with $a=\cos(\phi)$, $b=\sin(\phi)$, $c=-\sin(\phi)$ and $d=\cos(\phi)$ is a one-dimensional subspace parameterized by $\phi$ that is distinct from other structures such as shearings or scalings. Learning a structure refers to learning the general rule of a mapping (e.g. the class of rotations) and learning a parameter refers to learning the particular instance of a mapping (e.g. the rotation angle). In these studies it was found that experiencing many examples of the same structure facilitates learning of new motor mappings with the same structure but a different parameter. Such structure-specific facilitation is distinct from metalearning processes that are structure-independent, such as a generic increase in adaptability (Seidler, 2004).
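The rotation subspace can be made concrete with a short sketch (the function names are illustrative, not from the original studies): every choice of $\phi$ gives one instance of the rotation structure, and all such instances share an invariant (length preservation) that shearings and scalings lack.

```python
import numpy as np

def rotation(phi):
    """Visuomotor rotation: a one-parameter family (a, b, c, d) =
    (cos phi, sin phi, -sin phi, cos phi) inside the four-dimensional
    space of linear maps."""
    return np.array([[np.cos(phi),  np.sin(phi)],
                     [-np.sin(phi), np.cos(phi)]])

def perceived(x_hand, transform):
    """x_P = A @ x_H: perceived hand position under the transform."""
    return transform @ x_hand

x_hand = np.array([1.0, 0.0])
# Instances of the same structure (rotation), different parameter phi:
for phi in (np.pi / 6, np.pi / 4, np.pi / 2):
    x_p = perceived(x_hand, rotation(phi))
    # Rotations preserve length, so |x_P| stays 1 for every phi
    # in this structure, unlike shearings or scalings.
    print(phi, x_p, np.linalg.norm(x_p))
```

In this picture, structural learning corresponds to discovering that the experienced transformations all lie on the one-dimensional rotation subspace, while parametric learning corresponds to estimating the current value of $\phi$.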


References

  • Acuna D, Schrater P (2010) Structure learning in human sequential decision-making. PLoS Computational Biology 6(12): e1001003
  • Åström KJ, Wittenmark B (1995) Adaptive control. 2nd ed. Reading, MA: Addison-Wesley
  • Braun DA, Aertsen A, Wolpert DM, Mehring C. (2009a) Motor task variation induces structural learning. Current Biology 19:1–6
  • Braun DA, Aertsen A, Wolpert DM, Mehring C. (2009b) Learning optimal adaptation strategies in unpredictable motor tasks. Journal of Neuroscience 29:6472–8.
  • Braun DA, Mehring C, Wolpert DM (2010) Structure learning in action. Behavioural Brain Research 206:157–165
  • Gershman SJ, Niv Y. (2010) Learning latent structure: carving nature at its joints. Current Opinion in Neurobiology 20:251–256
  • Gopnik A, Glymour C, Sobel DM, Schulz LE, Kushnir T, Danks D. (2004) A theory of causal learning in children: causal maps and Bayes nets. Psychological Review 111:3–32.
  • Griffiths TL, Tenenbaum JB. (2005) Structure and strength in causal induction. Cognitive Psychology 51:334–84.
  • Harlow HF. (1949) The formation of learning sets. Psychological Review 56:51–65.
  • Kemp C, Perfors A, Tenenbaum JB. (2004) Learning domain structures. In: Proceedings of the twenty-sixth annual conference of the cognitive science society
  • Kemp C, Tenenbaum JB. (2009) Structured statistical models of inductive reasoning. Psychological Review 116:20–58
  • Kemp C, Tenenbaum JB. (2008) The discovery of structural form. Proceedings of the National Academy of Sciences of the United States of America 105:10687–92
  • Pearl J. (1988) Probabilistic reasoning in intelligent systems: networks of plausible inference. San Mateo, CA: Morgan Kaufmann Publishers
  • Sastry S, Bodson M (1989) Adaptive Control: Stability, Convergence and Robustness. Englewood Cliffs, NJ: Prentice-Hall Advanced Reference Series
  • Seidler RD. (2004) Multiple motor learning experiences enhance motor adaptability. Journal of Cognitive Neuroscience 16:65–73
  • Steyvers M, Tenenbaum JB, Wagenmakers EJ, Blum B. (2003) Inferring causal networks from observations and interventions. Cognitive Science 27:453–89.
  • Tenenbaum JB, Griffiths TL. (2001) Generalization, similarity, and Bayesian inference. The Behavioral and Brain Sciences 24:629–40
  • Tenenbaum JB, Griffiths TL. (2001a) Structure learning in human causal induction. In: Advances in neural information processing systems. MIT Press

Licensed under CC BY-SA 3.0 | Source: http://www.scholarpedia.org/article/Structural_learning