How to grow a mind: statistics, structure, and abstraction [paper]
Tenenbaum et al. attempt to address three questions:
How does abstract knowledge guide learning and inference from sparse data?
What forms does abstract knowledge take, across different domains and tasks?
How is abstract knowledge itself acquired?
The overarching answer to these three questions is “hierarchical Bayesian models” (HBMs):
“Abstract knowledge is encoded in a probabilistic generative model, a kind of mental model that describes the causal processes in the world giving rise to the learner’s observations as well as unobserved or latent variables that support effective prediction and action if the learner can infer their hidden state.”
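To make "inverting a generative model" concrete, here is a minimal sketch (not from the paper; the causes, priors, and likelihoods are purely illustrative): a hidden cause generates noisy observations, and Bayes' rule recovers a posterior over the hidden state from just a few data points.

```python
# Minimal sketch: Bayesian inversion of a tiny generative model.
# A hidden "cause" (fair vs. biased coin) generates observations;
# the learner infers the hidden state from sparse data.
# All names and numbers are illustrative, not from the paper.

def posterior_over_causes(observations, priors, likelihoods):
    """Return P(cause | observations) via Bayes' rule."""
    # Unnormalized posterior: prior * product of per-observation likelihoods.
    unnorm = {}
    for cause, prior in priors.items():
        p = prior
        for obs in observations:
            p *= likelihoods[cause][obs]
        unnorm[cause] = p
    z = sum(unnorm.values())
    return {cause: p / z for cause, p in unnorm.items()}

# The generative model: each latent cause specifies how observations arise.
priors = {"fair": 0.9, "biased": 0.1}
likelihoods = {
    "fair":   {"heads": 0.5, "tails": 0.5},
    "biased": {"heads": 0.9, "tails": 0.1},
}

# Sparse data: three flips, all heads.
print(posterior_over_causes(["heads", "heads", "heads"], priors, likelihoods))
```

Even with a strong prior toward "fair", three heads in a row shift meaningful probability onto the biased hypothesis; this is the sense in which abstract knowledge (the priors and likelihoods) guides inference from sparse data.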
Abstract knowledge takes the form of structured, symbolic representations, such as “graphs, grammars, predicate logic, relational schemas, and functional programs”. The form of the knowledge itself can also be inferred via probabilistic generative models, through the use of “relational data structures such as graph schemas, templates for graphs based on types of nodes, or probabilistic graph grammars”.
This is really where HBMs come into play: they “address the origins of hypothesis spaces and priors by positing not just a single level of hypotheses to explain the data but multiple levels: hypothesis spaces of hypothesis spaces, with priors on priors”.
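A hedged sketch of the "priors on priors" idea, using a hierarchical beta-binomial model (a standard textbook HBM, not the paper's own example): observing several related coins lets the learner infer a hyperprior over coin biases, which then sharpens predictions about a new coin seen only once. The grid size, data, and uniform hyperprior are assumptions made for illustration.

```python
# Hierarchical Bayesian sketch: "priors on priors" with a beta-binomial model.
# Several coins share a latent prior Beta(a, b) over their biases; the learner
# infers a posterior over (a, b) from data on all coins, then uses it to make
# sharper predictions for a new, sparsely observed coin.
import math
from itertools import product

def log_betabinom(k, n, a, b):
    """log P(k heads in n flips) with the coin's bias integrated out under Beta(a, b)."""
    return (math.log(math.comb(n, k))
            + math.lgamma(k + a) + math.lgamma(n - k + b) - math.lgamma(n + a + b)
            - (math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)))

# Data: (heads, flips) for several coins from the same "mint" (illustrative).
coins = [(9, 10), (8, 10), (10, 10), (7, 10)]

# Grid over hyperparameters (a, b) with a uniform hyperprior: priors on priors.
grid = list(product(range(1, 21), repeat=2))
log_post = [sum(log_betabinom(k, n, a, b) for k, n in coins) for a, b in grid]

# Normalize and compute the predictive probability of heads for a NEW coin
# after seeing a single head: E[(a + 1) / (a + b + 1)] under the hyperposterior.
m = max(log_post)
weights = [math.exp(lp - m) for lp in log_post]
z = sum(weights)
pred = sum(w / z * (a + 1) / (a + b + 1) for w, (a, b) in zip(weights, grid))
print(f"P(heads) for a new coin after one head: {pred:.3f}")
# Well above the 2/3 that a flat Beta(1, 1) prior would give after one head.
```

The point of the hierarchy is visible in the last line: because the higher-level hyperparameters were learned from the other coins, a single observation of the new coin already yields a confident prediction, which is exactly the kind of learning-to-learn the HBM framing is meant to capture.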