6. Regression Analysis

MIT 18.S096 Topics in Mathematics with Applications in Finance, Fall 2013
Instructor: Peter Kempthorne

This lecture introduces the mathematical and statistical foundations of regression analysis, particularly linear regression.

License: Creative Commons BY-NC-SA
Comments

This lecture, compared with the previous ones, provides a great example of the importance of using the blackboard in math courses.

zhuangjiwang

Timestamps:
0:02:40 Overview
0:29:10 Ordinary Least Squares (OLS) Estimates
0:45:54 Gauss-Markov Theorem
0:54:47 Generalized Least Squares (GLS) Estimates
0:58:17 Normal Regression Models
1:19:25 Maximum Likelihood Estimation

SeikoVanPaath
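
For anyone following along, here is a minimal numerical sketch of the 0:29:10 segment, the OLS estimate \hat{\beta} = (X'X)^{-1} X'y, in NumPy. The data, dimensions, and noise level below are made up purely for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Simulated design matrix: n observations, an intercept column plus p regressors.
n, p = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
beta_true = np.array([1.0, 2.0, -0.5, 0.3])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# OLS estimate beta_hat = (X'X)^{-1} X'y, computed with a linear solve
# rather than an explicit matrix inverse for numerical stability.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Fitted values and residual sum of squares.
y_hat = X @ beta_hat
rss = (y - y_hat) @ (y - y_hat)
print("beta_hat:", beta_hat)
print("RSS:", rss)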

This lecture convinced me that no amount of money you pay will guarantee you good teachers.

phillustrator

Holy, who knew that linear regression could get so complex.

lessmoneylessproblems

Awesome content; at the very least, my knowledge of matrices and statistics has been broadened. Thanks.

burnbush

@30:30 There's a mistake in the slides: the last term in the sum should be \beta_{p}, not \beta_{i, p}

caverac
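
For context, the model line on the slide around 30:30 should presumably read

y_i = \beta_1 x_{i,1} + \beta_2 x_{i,2} + \cdots + \beta_p x_{i,p} + \epsilon_i,

i.e. the regression coefficients \beta_j carry only the regressor index j, not the observation index i.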

The slides ruined the lectures. This is a math course; writing on the board can save more time than talking and hand-waving.

liuauto

This is a good example of how a maths course should not be taught. The notation especially is dodgy: the random matrix X is conflated with the matrix of realizations of X all the time. Ridiculous.

KARABNAS

Good content, but it really should have been handled in two lectures. A lot of time is spent on the basics, and then all of the more advanced details are simply glossed over due to a lack of time.

SergeiIakhnin

I miss Choongbum. This guy shouldn't be allowed to teach -.-

fustilarian

That was a brutally dense lecture with almost no real-life analogies. At that pace, you would need to be a linear algebra god to actually have the time to think about the statistical interpretations and applications of the expressions you are following.
Also, no use of much-needed graphs or technology whatsoever.

danielduranloosli

All his slides are in the textbook. Why are the students in the classroom?

chunlangong

52:07 Another typo? Shouldn't it be E[f'y] = f'E[y]?

btugh
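
The identity in question is just linearity of expectation: if f is a fixed (nonrandom) vector and y a random vector, then

E[f'y] = E[\sum_i f_i y_i] = \sum_i f_i E[y_i] = f' E[y].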

Anyone interested in working through the course together?

WallaceRoseVincent

Can anyone help me prove that:
if \epsilon ~ N_n(0_n, \sigma^2 \Sigma),
then \Sigma^{-1/2} \epsilon ~ N_n(0_n, \sigma^2 I_n)?

lochestnut
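
A sketch of the standard argument, assuming \Sigma is symmetric positive definite so that a symmetric \Sigma^{-1/2} exists: a linear transform A of a multivariate normal vector is again multivariate normal, with mean A\mu and covariance A \Sigma_0 A'. Taking A = \Sigma^{-1/2},

E[\Sigma^{-1/2} \epsilon] = \Sigma^{-1/2} 0_n = 0_n,
Cov(\Sigma^{-1/2} \epsilon) = \Sigma^{-1/2} (\sigma^2 \Sigma) (\Sigma^{-1/2})' = \sigma^2 \Sigma^{-1/2} \Sigma \Sigma^{-1/2} = \sigma^2 I_n,

hence \Sigma^{-1/2} \epsilon ~ N_n(0_n, \sigma^2 I_n). This is exactly the whitening step that reduces GLS to OLS in the 0:54:47 segment.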

Now I remember why I didn't like statistics and chose maths at university 😄

amandinelevecq

The previous lecturer could not cope with his workload.

okwrzmi

These slides seem riddled with mistakes: indices are lost, flipped, or added where they shouldn't be. I was expecting more from MIT.

CaseyVanBuren

With regression analysis, the order of the fit justifies weighting. It seems to me neural networks are a much better approach to fitting, though both are worthwhile. Neural nets provide options for rapid changes and simulation.

dankole

What does (y - theoretical y)^T * (y - theoretical y) mean, and why is the ^T there?

michalroesler
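
One way to answer: writing \hat{y} for the fitted (theoretical) values and e = y - \hat{y} for the residual vector, the transpose turns the product of two column vectors into a scalar,

(y - \hat{y})^T (y - \hat{y}) = e^T e = \sum_{i=1}^n (y_i - \hat{y}_i)^2,

which is the residual sum of squares, precisely the quantity that OLS minimizes.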