Artelt, André: Introduction to Machine Learning - Supplementary notes. 2019
Contents
- Notation
- Basic concepts
- Regression
- Classification
- The classification problem
- Hypothesis
- Risk minimization
- Risk minimization and maximum likelihood
- Bayesian model averaging
- Outlook
- Exercises
- Bayes classifier
- K-nearest neighbors model
- K-nearest neighbors classifier
- K-nearest neighbors regression model
- Parametric vs. non-parametric models
- Outlook
- Exercises
- Linear regression
- Modeling
- Cost function
- Optimization
- Feature transformation
- Regularization
- Probabilistic interpretation
- Robust regression
- Sparsity regularization
- Elastic net
- Bayesian linear regression
- Kernel regression
- Outlook
- Exercises
- Logistic regression
- Modeling
- Optimization
- Separating hyperplane
- Feature transformation, regularization & kernelization
- Outlook
- Exercises
- Tree-based models
- Evaluation
- Dimensionality reduction
- Clustering
- K-means
- Agglomerative clustering
- DBSCAN
- Spectral clustering
- Gaussian mixture model
- Outlook
- Exercises
- Appendices
- Convex optimization
- Convex set
- Convex functions
- Convex optimization
- Closed form solution
- Gradient descent
- Intuition behind gradient descent
- Newton's method
- Quasi-Newton methods
- Choosing the step length
- Coordinate descent
- Linear programming
- Quadratic programming
- Lagrangian duality
- Outlook
- Exercises
- Probability theory & Statistical inference
- Basic probability
- Random variable
- Probability distributions
- Expectation
- Independence
- Moments
- Upper bounds
- Jensen's inequality
- Chebyshev's inequality
- Markov's inequality
- Chernoff bound
- Hoeffding's inequality
- Cauchy–Schwarz inequality
- Union bound
- Law of large numbers
- Central limit theorem
- Information theory
- Inference
- Bootstrapping
- Constructing estimators
- Outlook
- Exercises
