Stanford Machine Learning Week 2 Assignment: Linear Regression

Plotting the Data
data = load('ex1data1.txt'); % read comma separated data
X = data(:, 1); y = data(:, 2);
m = length(y);                     % number of training examples
plot(X, y, 'rx', 'MarkerSize', 10); % plot the training data as red crosses ('rx') with marker size 10

ylabel('Profit in $10,000s');            % Set the y-axis label
xlabel('Population of City in 10,000s'); % Set the x-axis label
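The remaining snippets follow the exercise skeleton: X is assumed to already be the design matrix with a leading column of ones for the intercept term, and theta starts at zero. A minimal sketch of that setup (variable names as in the exercise):

X = [ones(m, 1), data(:, 1)];  % prepend an intercept column of ones
theta = zeros(2, 1);           % initial parameters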

Computing the cost J(θ)
m = length(y);                           % number of training examples
T = 1 / (2 * m) * (X * theta - y) .^ 2;  % per-example squared error, halved
J = sum(T);                              % cost J(theta)
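In formula form, this is the usual squared-error cost:

J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2, \quad h_\theta(x) = \theta^{T} x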

Gradient descent
t1 = theta(1) - alpha / m * sum(X * theta - y);             % X(:,1) is all ones, so no feature factor is needed
t2 = theta(2) - alpha / m * sum((X * theta - y) .* X(:,2)); % weight the residuals by the second feature column
theta = [t1; t2];                                           % simultaneous update of both parameters
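Each assignment is one instance of the gradient-descent update rule:

\theta_j := \theta_j - \frac{\alpha}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}

t1 and t2 are computed before theta is overwritten so that both parameters are updated simultaneously.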

Feature Normalization
n = size(X_norm, 2);  % number of features
for i = 1:n
    mu(i) = mean(X_norm(:, i));     % per-feature mean
    sigma(i) = std(X_norm(:, i));   % per-feature standard deviation
    X_norm(:, i) = (X_norm(:, i) - mu(i)) / sigma(i);  % zero mean, unit standard deviation
end
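The same normalization can be written without the loop; a sketch assuming implicit broadcasting (Octave, or MATLAB R2016b and later):

mu = mean(X_norm);                % row vector of per-feature means
sigma = std(X_norm);              % row vector of per-feature standard deviations
X_norm = (X_norm - mu) ./ sigma;  % broadcast across all rows at once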

Gradient Descent (multiple variables)
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);
n = size(X, 2);     % number of parameters (columns of X, including the intercept)
tmp = zeros(n, 1);  % buffer so every parameter is updated simultaneously

for iter = 1:num_iters
    for i = 1:n
        tmp(i) = theta(i) - alpha / m * (X * theta - y)' * X(:, i);
    end
    theta = tmp;
    J_history(iter) = computeCostMulti(X, y, theta);  % record the cost at each iteration
end
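Since each tmp(i) is just the residual vector dotted with one column of X, the inner loop can be collapsed into a single matrix product. A vectorized sketch that should be equivalent:

for iter = 1:num_iters
    theta = theta - alpha / m * X' * (X * theta - y);  % update all parameters at once
    J_history(iter) = computeCostMulti(X, y, theta);
end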

Normal Equations
theta = pinv(X'*X)*X'*y;  % closed-form least-squares solution
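This evaluates the normal equation directly:

\theta = \left( X^{T} X \right)^{-1} X^{T} y

Using pinv rather than inv means the expression still returns a sensible (pseudoinverse) solution when X'X is not invertible, for example with redundant features.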
