What Am I Reading... Today

3rd March

Neural Network Methods for Natural Language Processing by Yoav Goldberg

Learning

A beautiful explanation by Prof. Sudeshna Sarkar of IIT Kharagpur

  1. Hypothesis space
  2. Inductive Bias
    1. Restriction
    2. Preference
    3. Example - Occam's Razor - prefer the simplest hypothesis consistent with the target function
  3. Generalization Error
    1. Bias - errors due to incorrect assumptions or restrictions on the hypothesis space
    2. Variance - the models you estimate from different training sets differ from each other
  4. Highlight
    1. Learning is refining the hypothesis space
    2. In a particular learning problem, you first define the hypothesis space, i.e. the class of functions you are going to consider; then, given the data points, you try to come up with the best hypothesis the data supports.
    3. To describe a function, we have to decide
      1. the vocabulary of features
      2. the function class, i.e. the type or language of the function
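The Occam's Razor preference above can be sketched in code. This is my own illustration, not from the lecture: the hypothesis space is polynomials of degree 0 to 5, and "consistent" is approximated by the training error falling below a tolerance (both the degree cap and the tolerance are arbitrary choices for the sketch).

```python
import numpy as np

# Hypothesis space: polynomials of degree 0..max_degree.
# Occam's-razor-style preference bias: scan from the simplest hypothesis
# upward and return the first one consistent with the data.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = 2 * x + 1 + rng.normal(0, 0.01, size=x.shape)  # nearly linear data

def simplest_consistent_degree(x, y, max_degree=5, tol=1e-3):
    """Return the lowest polynomial degree whose training MSE is below tol."""
    for d in range(max_degree + 1):
        coeffs = np.polyfit(x, y, d)
        mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
        if mse < tol:
            return d
    return max_degree

print(simplest_consistent_degree(x, y))  # degree 1 suffices for near-linear data
```

A degree-0 (constant) hypothesis cannot fit the trend, so the scan passes it and stops at degree 1, the simplest hypothesis consistent with this data.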
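The bias/variance split in item 3 can also be made concrete. Again a hedged sketch of my own (the sine target, noise level, and degrees 1 vs. 9 are illustrative choices): fit the same model class on many independently drawn training sets, then measure the spread of the fitted predictions (variance) and the gap between their average and the true function (squared bias).

```python
import numpy as np

rng = np.random.default_rng(1)

def true_f(x):
    return np.sin(2 * np.pi * x)

def fit_on_fresh_sample(degree, n=15, noise=0.2):
    """Draw a new training set and fit a polynomial of the given degree."""
    x = rng.uniform(0, 1, n)
    y = true_f(x) + rng.normal(0, noise, n)
    return np.polyfit(x, y, degree)

x_test = np.linspace(0, 1, 50)
results = {}
for degree in (1, 9):  # a restricted hypothesis space vs. a flexible one
    preds = np.array([np.polyval(fit_on_fresh_sample(degree), x_test)
                      for _ in range(30)])
    # Squared bias: how far the average fitted model is from the truth.
    bias_sq = ((preds.mean(axis=0) - true_f(x_test)) ** 2).mean()
    # Variance: how much the fits disagree across training sets.
    variance = preds.var(axis=0).mean()
    results[degree] = (bias_sq, variance)
    print(f"degree {degree}: bias^2 = {bias_sq:.3f}, variance = {variance:.3f}")
```

The restricted degree-1 space cannot represent the sine (high bias) but its fits barely change across training sets (low variance); the degree-9 space shows the opposite trade-off.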