How to grow a mind: statistics, structure, and abstraction [paper]

courses.csail.mit.edu/6.803/pdf/tenenbaum2011.pdf

In this 2011 Science review, Tenenbaum et al. address three questions:

  1. How does abstract knowledge guide learning and inference from sparse data?

  2. What forms does abstract knowledge take, across different domains and tasks?

  3. How is abstract knowledge itself acquired?

The overarching answer to these three questions is “hierarchical Bayesian models” (HBMs):

  1. “Abstract knowledge is encoded in a probabilistic generative model, a kind of mental model that describes the causal processes in the world giving rise to the learner’s observations as well as unobserved or latent variables that support effective prediction and action if the learner can infer their hidden state.” (The equations after this list spell out the Bayes’ rule this inference rests on.)

  2. Abstract knowledge takes the form of structured, symbolic representations, such as “graphs, grammars, predicate logic, relational schemas, and functional programs”. The form of the knowledge itself can also be inferred via probabilistic generative models, through the use of “relational data structures such as graph schemas, templates for graphs based on types of nodes, or probabilistic graph grammars”.

  3. This is really where HBMs come into play: they “address the origins of hypothesis spaces and priors by positing not just a single level of hypotheses to explain the data but multiple levels: hypothesis spaces of hypothesis spaces, with priors on priors”. (A minimal numeric sketch of this idea follows below.)
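
All three answers lean on the same machinery. The paper’s starting point is Bayes’ rule over a hypothesis space H; the hierarchical move is to treat the prior itself as the output of inference at a higher level. The first equation below is the form the paper uses; the joint form over a higher-level theory T is a paraphrase of the “priors on priors” idea, not a quote:

```latex
% Bayes' rule over a hypothesis space H:
P(h \mid d) = \frac{P(d \mid h)\,P(h)}{\sum_{h' \in H} P(d \mid h')\,P(h')}

% The hierarchical move, paraphrased: hypotheses h are themselves generated
% by a higher-level theory T, so the learner infers both jointly,
% with P(T) acting as a prior on priors:
P(h, T \mid d) \propto P(d \mid h)\,P(h \mid T)\,P(T)
```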
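To make “hypothesis spaces of hypothesis spaces, with priors on priors” concrete, here is a minimal two-level sketch in a Beta-Binomial setting. This example is not from the paper; all hypotheses and numbers are illustrative. Several coins come from one unknown “factory”, and the learner infers which prior over coin biases (itself drawn from a hyperprior over candidate priors) best explains all the sparse flip data at once:

```python
import numpy as np
from scipy.special import betaln, gammaln

# Level 2 (hyperprior): a small discrete space of candidate priors over
# coin biases, each a Beta(a, b); these play the role of "theories".
hyper_hypotheses = [(1.0, 1.0),    # biases could be anything (uniform)
                    (20.0, 20.0),  # coins are usually close to fair
                    (1.0, 5.0)]    # coins usually favor tails
log_hyper_prior = np.log(np.full(len(hyper_hypotheses),
                                 1.0 / len(hyper_hypotheses)))

# Sparse data: a few coins from the same factory, each flipped a few times.
coins = [(3, 4), (2, 3), (4, 5)]   # (heads, flips) per coin

def log_evidence(heads, flips, a, b):
    """Log marginal likelihood of one coin's flips under a Beta(a, b)
    prior on its bias (Beta-Binomial: the bias is integrated out)."""
    log_comb = (gammaln(flips + 1) - gammaln(heads + 1)
                - gammaln(flips - heads + 1))
    return log_comb + betaln(heads + a, flips - heads + b) - betaln(a, b)

# Posterior over theories: which prior-on-biases best explains ALL coins?
log_post = log_hyper_prior.copy()
for heads, flips in coins:
    log_post += np.array([log_evidence(heads, flips, a, b)
                          for a, b in hyper_hypotheses])
post = np.exp(log_post - log_post.max())
post /= post.sum()

for (a, b), p in zip(hyper_hypotheses, post):
    print(f"Beta({a:g}, {b:g}) theory: posterior {p:.3f}")
```

Even with only a dozen flips in total, pooling the data across coins shifts belief among the theories, and a brand-new coin then inherits the inferred prior before its first flip. That is the sense in which abstract knowledge, itself learned, “guides learning and inference from sparse data”.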

Related link (Quora): How has the perspective of “How to Grow a Mind: Statistics, Structure, and Abstraction” changed in the current times with Deep Learning?