Machine Learning - II. Linear Regression with One Variable (Week 1)

http://blog.csdn.net/pipisorry/article/details/43115525

Study notes on Andrew Ng's Machine Learning course

Linear regression with one variable

Model representation

Example:

[Figure: a training set of house sizes and their selling prices, used to predict housing prices]

This is a regression problem (a form of supervised learning), and specifically univariate linear regression (linear regression with one variable).

Notation (terminology):
m = number of training examples
x's = "input" variables / features
y's = "output" variables / "target" variables

e.g. (x, y) denotes one training example, while (x^(i), y^(i)) denotes the i-th training example.
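As a concrete illustration of this notation, here is a minimal sketch in Python; the dataset values below are made up for demonstration:

```python
# Hypothetical training set: house sizes x (square feet) and prices y ($1000s).
training_set = [(2104, 460), (1416, 232), (1534, 315), (852, 178)]

m = len(training_set)      # m = number of training examples
x1, y1 = training_set[0]   # (x^(1), y^(1)) = the 1st training example

print(m)       # 4
print(x1, y1)  # 2104 460
```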

Model representation

[Figure: the training set is fed into a learning algorithm, which outputs the hypothesis h; given an input x, h produces an estimated output y]

h stands for the hypothesis; h maps x's to y's (that is, it is the function from x to y that we are solving for).
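For univariate linear regression, the hypothesis is a linear function of one input. A minimal sketch (the parameter values 50 and 0.1 are made-up examples):

```python
def hypothesis(theta0, theta1, x):
    """h_theta(x) = theta0 + theta1 * x: maps an input x to a predicted y."""
    return theta0 + theta1 * x

# With theta0 = 50 and theta1 = 0.1, a 1000-square-foot house is
# predicted to cost 50 + 0.1 * 1000 = 150 (in $1000s).
print(hypothesis(50, 0.1, 1000))  # 150.0
```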



Cost function

In the example above, h takes the form

    h_θ(x) = θ0 + θ1 * x

and what we need to do is work out how to go about choosing these parameter values, θ0 and θ1.

We try to minimize the squared difference between the output of the hypothesis and the actual price of the house:

    minimize over θ0, θ1:  (1 / 2m) * Σ_{i=1..m} (h_θ(x^(i)) - y^(i))^2

We define this function as (the J function is one kind of cost function):

    J(θ0, θ1) = (1 / 2m) * Σ_{i=1..m} (h_θ(x^(i)) - y^(i))^2
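The cost function J can be sketched directly from its definition; a minimal Python version, checked on a hypothetical dataset where y = 2x exactly (so the cost at θ0 = 0, θ1 = 2 is zero):

```python
def cost(theta0, theta1, training_set):
    """Squared-error cost J(theta0, theta1) = (1/2m) * sum of squared errors."""
    m = len(training_set)
    total = sum((theta0 + theta1 * x - y) ** 2 for x, y in training_set)
    return total / (2 * m)

data = [(1, 2), (2, 4), (3, 6)]   # made-up data with y = 2x
print(cost(0, 2, data))  # 0.0  -- the hypothesis fits the data perfectly
print(cost(0, 1, data))  # (1 + 4 + 9) / 6 ≈ 2.333
```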

Why do we divide by 2m?

Minimizing (1 / 2m) times the sum of squared errors yields the same θ0 and θ1 as minimizing the sum itself, since scaling by a positive constant does not change where the minimum occurs. Averaging over m keeps the cost on the same scale regardless of the training-set size, and putting the 2, the constant one-half, in front just makes some of the math a little easier.
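Concretely, the one-half cancels the factor of 2 that comes out of differentiating the square; a standard calculus step, not spelled out in the original post:

```latex
\frac{\partial}{\partial \theta_1}\,\frac{1}{2m}\sum_{i=1}^{m}\bigl(\theta_0 + \theta_1 x^{(i)} - y^{(i)}\bigr)^2
  = \frac{1}{2m}\sum_{i=1}^{m} 2\bigl(\theta_0 + \theta_1 x^{(i)} - y^{(i)}\bigr)\,x^{(i)}
  = \frac{1}{m}\sum_{i=1}^{m}\bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)\,x^{(i)}
```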
Why do we take the squares of the errors?

It turns out that the squared-error cost function is a reasonable choice and will work well for most regression problems. There are other cost functions that would work pretty well, but the squared-error cost function is probably the most commonly used one for regression problems.




