cs231n assignment2: complete code summary


cs231n assignment2 wrap-up

  • Techniques

  • dx is an all-zero matrix with the same shape as x; the entries where x > 0 are set to 1.
      dx = np.zeros_like(x, dtype=float)
      dx[x > 0] = 1
    
  • Reshape dx back to the shape of x.
  •   dx = np.reshape(dx, x.shape)
    
  • Batch norm normalizes each feature dimension, not each image.
  • Simplify the derived gradient by hand first, then translate it to code to reduce the amount of computation.
  •   x_norm, gamma, beta, sample_mean, sample_var, x, eps = cache
      dnorm = gamma * dout
      dvar = -0.5 * np.sum(dnorm * (x - sample_mean), axis=0) * np.power(sample_var + eps, -1.5)
      dmean = -1.0 * np.sum(dnorm * np.power(sample_var + eps, -0.5), axis=0) - 2.0 * dvar * np.mean(x - sample_mean, axis=0)
      dgamma = np.sum(dout * x_norm, axis=0)
      dbeta = np.sum(dout, axis=0)
    
  • mask
  • mask = np.random.rand(*x.shape) < p
    
  • Zero-padding with np.pad
  • x_padded = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    
  • The input to the convolutional layer is a batch of N examples, and the output is also a batch of N.
  •   Input:
      - x: Input data of shape (N, C, H, W)
      - w: Filter weights of shape (F, C, HH, WW)
      - b: Biases, of shape (F,)
      Output:
      - out: Output data, of shape (N, F, H', W') where H' and W' are given by
        H' = 1 + (H + 2 * pad - HH) / stride
        W' = 1 + (W + 2 * pad - WW) / stride
    
  • The maximum position gets 1 and the rest get 0 (the max-pooling mask).
  •    m = np.max(win)
       (m == win)
    
  • List concatenation to build the layer dimensions
  • layers_dims = [input_dim] + hidden_dims + [num_classes]
    
  • Why use a running average? In assignment1, both train and test use the mean of the entire training set, whereas here an exponentially decaying running average is used. https://www.zhihu.com/question/55621104
  • Order: wx+b - batch norm - relu - dropout
  • code: layers.py
    import numpy as np
    
    
    def affine_forward(x, w, b):
    
      out = None
    
      N = x.shape[0]
      x_reshape = x.reshape(N, -1)
      out = np.dot(x_reshape, w) + b
    
      cache = (x, w, b)
      return out, cache
    
    
    def affine_backward(dout, cache):
      """
      Computes the backward pass for an affine layer.
    
      Inputs:
      - dout: Upstream derivative, of shape (N, M)
      - cache: Tuple of:
        - x: Input data, of shape (N, d_1, ... d_k)
        - w: Weights, of shape (D, M)
    
      Returns a tuple of:
      - dx: Gradient with respect to x, of shape (N, d1, ..., d_k)
      - dw: Gradient with respect to w, of shape (D, M)
      - db: Gradient with respect to b, of shape (M,)
      """
      x, w, b = cache
      dx, dw, db = None, None, None
    
      N, M = dout.shape
      x_reshape = x.reshape(N, -1)
      dw = np.transpose(x_reshape).dot(dout)
      dx_reshape = dout.dot(np.transpose(w))
      dx = np.reshape(dx_reshape, x.shape)
      db = np.sum(dout, axis=0)
      
      return dx, dw, db
    
    
    def relu_forward(x):
      """
      Computes the forward pass for a layer of rectified linear units (ReLUs).
    
      Input:
      - x: Inputs, of any shape
    
      Returns a tuple of:
      - out: Output, of the same shape as x
      - cache: x
      """
      out = None
    
      out = np.maximum(x, 0)
    
      cache = x
      return out, cache
    
    
    def relu_backward(dout, cache):
      """
      Computes the backward pass for a layer of rectified linear units (ReLUs).
    
      Input:
      - dout: Upstream derivatives, of any shape
      - cache: Input x, of same shape as dout
    
      Returns:
      - dx: Gradient with respect to x
      """
      dx, x = None, cache
     
      dx = dout.copy()
      dx[x <= 0] = 0
      
      return dx
    
    
    def batchnorm_forward(x, gamma, beta, bn_param):
      """
      Forward pass for batch normalization.
      
      During training the sample mean and (uncorrected) sample variance are
      computed from minibatch statistics and used to normalize the incoming data.
      During training we also keep an exponentially decaying running mean of the mean
      and variance of each feature, and these averages are used to normalize data
      at test-time.
    
      At each timestep we update the running averages for mean and variance using
      an exponential decay based on the momentum parameter:
    
      running_mean = momentum * running_mean + (1 - momentum) * sample_mean
      running_var = momentum * running_var + (1 - momentum) * sample_var
    
      Note that the batch normalization paper suggests a different test-time
      behavior: they compute sample mean and variance for each feature using a
      large number of training images rather than using a running average. For
      this implementation we have chosen to use running averages instead since
      they do not require an additional estimation step; the torch7 implementation
      of batch normalization also uses running averages.
    
      Input:
      - x: Data of shape (N, D)
      - gamma: Scale parameter of shape (D,)
  - beta: Shift parameter of shape (D,)
      - bn_param: Dictionary with the following keys:
        - mode: 'train' or 'test'; required
        - eps: Constant for numeric stability
        - momentum: Constant for running mean / variance.
        - running_mean: Array of shape (D,) giving running mean of features
        - running_var Array of shape (D,) giving running variance of features
    
      Returns a tuple of:
      - out: of shape (N, D)
      - cache: A tuple of values needed in the backward pass
      """
      mode = bn_param['mode']
      eps = bn_param.get('eps', 1e-5)
      momentum = bn_param.get('momentum', 0.9)
    
      N, D = x.shape
      running_mean = bn_param.get('running_mean', np.zeros(D, dtype=x.dtype))
      running_var = bn_param.get('running_var', np.zeros(D, dtype=x.dtype))
    
      out, cache = None, None
      if mode == 'train':
       
        sample_mean = np.mean(x, axis=0)
        sample_var = np.var(x, axis=0)
        x_norm = (x - sample_mean) / np.sqrt(sample_var + eps)
        out = gamma * x_norm + beta
        cache = (x_norm, gamma, beta, sample_mean, sample_var, x, eps)
        running_mean = momentum * running_mean + (1 - momentum) * sample_mean
        running_var = momentum * running_var + (1 - momentum) * sample_var

      elif mode == 'test':
        # At test time, normalize using the running averages accumulated during training.
        x_norm = (x - running_mean) / np.sqrt(running_var + eps)
        out = gamma * x_norm + beta
       
      else:
        raise ValueError('Invalid forward batchnorm mode "%s"' % mode)
    
      # Store the updated running means back into bn_param
      bn_param['running_mean'] = running_mean
      bn_param['running_var'] = running_var
    
      return out, cache
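

    def _batchnorm_sanity_check():
      # Sketch only (my addition, not part of the assignment's layers.py): batch
      # norm normalizes each feature column, not each image, so with gamma = 1 and
      # beta = 0 a training-mode forward pass gives roughly zero mean and unit
      # variance along axis 0 for every feature.
      x = 10 + 4 * np.random.randn(100, 5)
      out, _ = batchnorm_forward(x, np.ones(5), np.zeros(5), {'mode': 'train'})
      print(out.mean(axis=0))  # approximately 0 for each of the 5 features
      print(out.std(axis=0))   # approximately 1 for each of the 5 features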
    
    
    def batchnorm_backward(dout, cache):
      """
      Backward pass for batch normalization.
      
      For this implementation, you should write out a computation graph for
      batch normalization on paper and propagate gradients backward through
      intermediate nodes.
      
      Inputs:
      - dout: Upstream derivatives, of shape (N, D)
      - cache: Variable of intermediates from batchnorm_forward.
      
      Returns a tuple of:
      - dx: Gradient with respect to inputs x, of shape (N, D)
      - dgamma: Gradient with respect to scale parameter gamma, of shape (D,)
      - dbeta: Gradient with respect to shift parameter beta, of shape (D,)
      """
      dx, dgamma, dbeta = None, None, None
    
      x_norm, gamma, beta, sample_mean, sample_var, x, eps = cache
      N, D = x.shape
      dnorm = gamma * dout
      dvar = -0.5 * np.sum(dnorm * (x - sample_mean), axis=0) * np.power(sample_var + eps, -1.5)
      dmean = -1.0 * np.sum(dnorm * np.power(sample_var + eps, -0.5), axis=0) - 2.0 * dvar * np.mean(x - sample_mean, axis=0)
      dgamma = np.sum(dout * x_norm, axis=0)
      dbeta = np.sum(dout, axis=0)
      dx = dnorm * np.power(sample_var + eps, -0.5) + 2.0 / N * dvar * (x - sample_mean) + dmean / N
    
    
      return dx, dgamma, dbeta
    
    
    def batchnorm_backward_alt(dout, cache):
      """
      Alternative backward pass for batch normalization.
      
      For this implementation you should work out the derivatives for the batch
      normalizaton backward pass on paper and simplify as much as possible. You
      should be able to derive a simple expression for the backward pass.
      
      Note: This implementation should expect to receive the same cache variable
      as batchnorm_backward, but might not use all of the values in the cache.
      
      Inputs / outputs: Same as batchnorm_backward
      """
      dx, dgamma, dbeta = None, None, None
    
      x_normalized, gamma, beta, sample_mean, sample_var, x, eps = cache
      N, D = x.shape
      dx_normalized = dout * gamma  # [N,D]
      x_mu = x - sample_mean  # [N,D]
      sample_std_inv = 1.0 / np.sqrt(sample_var + eps)  # [1,D]
      dsample_var = -0.5 * np.sum(dx_normalized * x_mu, axis=0, keepdims=True) * sample_std_inv ** 3
      dsample_mean = -1.0 * np.sum(dx_normalized * sample_std_inv, axis=0, keepdims=True) - \
                     2.0 * dsample_var * np.mean(x_mu, axis=0, keepdims=True)
      dx1 = dx_normalized * sample_std_inv
      dx2 = 2.0 / N * dsample_var * x_mu
      dx = dx1 + dx2 + 1.0 / N * dsample_mean
      dgamma = np.sum(dout * x_normalized, axis=0)
      dbeta = np.sum(dout, axis=0)
     
      
      return dx, dgamma, dbeta
    
    
    def dropout_forward(x, dropout_param):
      """
      Performs the forward pass for (inverted) dropout.
    
      Inputs:
      - x: Input data, of any shape
      - dropout_param: A dictionary with the following keys:
        - p: Dropout parameter. We drop each neuron output with probability p.
        - mode: 'test' or 'train'. If the mode is train, then perform dropout;
          if the mode is test, then just return the input.
        - seed: Seed for the random number generator. Passing seed makes this
          function deterministic, which is needed for gradient checking but not in
          real networks.
    
      Outputs:
      - out: Array of the same shape as x.
      - cache: A tuple (dropout_param, mask). In training mode, mask is the dropout
        mask that was used to multiply the input; in test mode, mask is None.
      """
      p, mode = dropout_param['p'], dropout_param['mode']
      if 'seed' in dropout_param:
        np.random.seed(dropout_param['seed'])
      mask = None
      out = None
    
      if mode == 'train':
       
        mask = (np.random.rand(*x.shape) < p) / p  # inverted dropout: scale at train time
        out = x * mask
      
      elif mode == 'test':
    
        out = x
      
    
      cache = (dropout_param, mask)
      out = out.astype(x.dtype, copy=False)
    
      return out, cache
    
    
    def dropout_backward(dout, cache):
      """
      Perform the backward pass for (inverted) dropout.
    
      Inputs:
      - dout: Upstream derivatives, of any shape
      - cache: (dropout_param, mask) from dropout_forward.
      """
      dropout_param, mask = cache
      mode = dropout_param['mode']
      
      dx = None
      if mode == 'train':
      
        dx = dout * mask
        
      elif mode == 'test':
        dx = dout
      return dx
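

    def _dropout_sanity_check():
      # Sketch only (my addition, not part of the assignment's layers.py): with the
      # inverted dropout mask (np.random.rand(*x.shape) < p) / p, activations are
      # already rescaled at train time, so their expected value matches the plain
      # activations returned at test time and no extra scaling is needed later.
      x = np.random.randn(500, 500) + 10
      p = 0.5
      out_train, _ = dropout_forward(x, {'mode': 'train', 'p': p})
      out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p})
      print(x.mean(), out_train.mean(), out_test.mean())  # all roughly equal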
    
    
    def conv_forward_naive(x, w, b, conv_param):
      """
      A naive implementation of the forward pass for a convolutional layer.
    
      The input consists of N data points, each with C channels, height H and width
      W. We convolve each input with F different filters, where each filter spans
  all C channels and has height HH and width WW.
    
      Input:
      - x: Input data of shape (N, C, H, W)
      - w: Filter weights of shape (F, C, HH, WW)
      - b: Biases, of shape (F,)
      - conv_param: A dictionary with the following keys:
        - 'stride': The number of pixels between adjacent receptive fields in the
          horizontal and vertical directions.
        - 'pad': The number of pixels that will be used to zero-pad the input.
    
      Returns a tuple of:
      - out: Output data, of shape (N, F, H', W') where H' and W' are given by
        H' = 1 + (H + 2 * pad - HH) / stride
        W' = 1 + (W + 2 * pad - WW) / stride
      - cache: (x, w, b, conv_param)
      """
      out = None
      stride = conv_param['stride']
      pad = conv_param['pad']
      N, C, H, W = x.shape
      F, C, HH, WW = w.shape
      x_padded = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
      H_new = 1 + (H + 2 * pad - HH) // stride
      W_new = 1 + (W + 2 * pad - WW) // stride
      out = np.zeros((N, F, H_new, W_new))
      for i in xrange(N):  # ith image
        for f in xrange(F):  # fth filter
          for j in xrange(H_new):
            for k in xrange(W_new):
              out[i, f, j, k] = np.sum(x_padded[i, :, j*stride: j*stride+HH, k*stride: k*stride+WW] * w[f]) + b[f]
      cache = (x, w, b, conv_param)
      return out, cache
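

    def _conv_shape_check():
      # Sketch only (my addition): a worked instance of the output-size formula.
      # With H = W = 32, HH = WW = 3, pad = 1 and stride = 1 the spatial size is
      # 1 + (32 + 2 * 1 - 3) // 1 = 32, i.e. a "same" convolution.
      x = np.random.randn(2, 3, 32, 32)
      w = np.random.randn(4, 3, 3, 3)
      b = np.zeros(4)
      out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})
      print(out.shape)  # (2, 4, 32, 32)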
    
    
    def conv_backward_naive(dout, cache):
      """
      A naive implementation of the backward pass for a convolutional layer.
    
      Inputs:
      - dout: Upstream derivatives.
      - cache: A tuple of (x, w, b, conv_param) as in conv_forward_naive
    
      Returns a tuple of:
      - dx: Gradient with respect to x
      - dw: Gradient with respect to w
      - db: Gradient with respect to b
      """
      dx, dw, db = None, None, None
    
      x, w, b, conv_param = cache
      N, C, H, W = x.shape
      F, C, HH, WW = w.shape
      pad = conv_param['pad']
      stride = conv_param['stride']
      H_new = 1 + (H + 2 * pad - HH) // stride
      W_new = 1 + (W + 2 * pad - WW) // stride
      x_padded = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
      dx_padded = np.zeros_like(x_padded, dtype = float)
      dx = np.zeros_like(x, dtype=float)
      dw = np.zeros_like(w, dtype=float)
      db = np.zeros_like(b, dtype=float)
      for i in xrange(N):  # ith image
        for f in xrange(F):  # fth filter
          for j in xrange(H_new):
            for k in xrange(W_new):
              dx_padded[i, :, j*stride: j*stride+HH, k*stride: k*stride+WW] += dout[i, f, j ,k] * w[f]
              dw[f] += dout[i, f, j, k] * x_padded[i, :, j*stride: j*stride+HH, k*stride: k*stride+WW]
              db[f] += dout[i, f, j, k]

      # Strip the zero padding to recover the gradient of the original input.
      dx = dx_padded[:, :, pad:pad+H, pad:pad+W]

      return dx, dw, db
    
    
    def max_pool_forward_naive(x, pool_param):
      """
      A naive implementation of the forward pass for a max pooling layer.
    
      Inputs:
      - x: Input data, of shape (N, C, H, W)
      - pool_param: dictionary with the following keys:
        - 'pool_height': The height of each pooling region
        - 'pool_width': The width of each pooling region
        - 'stride': The distance between adjacent pooling regions
    
      Returns a tuple of:
      - out: Output data
      - cache: (x, pool_param)
      """
      out = None
      N, C, H, W = np.shape(x)
      pool_height, pool_width, stride = pool_param['pool_height'], pool_param['pool_width'], pool_param['stride']
      H_new = 1 + (H - pool_height) // stride
      W_new = 1 + (W - pool_width) // stride
      out = np.zeros((N, C, H_new, W_new))
    
    
    
      for i in xrange(N):  # ith image
        for f in xrange(C):  # fth filter
          for j in xrange(H_new):
            for k in xrange(W_new):
              out[i, f, j, k] = np.max(x[i, f, j*stride: j*stride+pool_height, k*stride: k*stride+pool_width])
    
      cache = (x, pool_param)
      return out, cache
    
    
    def max_pool_backward_naive(dout, cache):
      """
      A naive implementation of the backward pass for a max pooling layer.
    
      Inputs:
      - dout: Upstream derivatives
      - cache: A tuple of (x, pool_param) as in the forward pass.
    
      Returns:
      - dx: Gradient with respect to x
      """
      dx = None
      x, pool_param = cache
      N, C, H, W = np.shape(x)
      pool_height, pool_width, stride = pool_param['pool_height'], pool_param['pool_width'], pool_param['stride']
      H_new = 1 + (H - pool_height) // stride
      W_new = 1 + (W - pool_width) // stride
      dx = np.zeros_like(x)
    
      for i in xrange(N):  # ith image
        for f in xrange(C):  # fth filter
          for j in xrange(H_new):
            for k in xrange(W_new):
              win = x[i, f, j * stride: j * stride + pool_height, k * stride: k * stride + pool_width]
              m = np.max(win)
              dx[i, f, j * stride: j * stride + pool_height, k * stride: k * stride + pool_width] += (m== win) * dout[i, f, j, k]
     
      return dx
    
    
    def spatial_batchnorm_forward(x, gamma, beta, bn_param):
      """
      Computes the forward pass for spatial batch normalization.
      
      Inputs:
      - x: Input data of shape (N, C, H, W)
      - gamma: Scale parameter, of shape (C,)
      - beta: Shift parameter, of shape (C,)
      - bn_param: Dictionary with the following keys:
        - mode: 'train' or 'test'; required
        - eps: Constant for numeric stability
        - momentum: Constant for running mean / variance. momentum=0 means that
          old information is discarded completely at every time step, while
          momentum=1 means that new information is never incorporated. The
          default of momentum=0.9 should work well in most situations.
        - running_mean: Array of shape (D,) giving running mean of features
        - running_var Array of shape (D,) giving running variance of features
        
      Returns a tuple of:
      - out: Output data, of shape (N, C, H, W)
      - cache: Values needed for the backward pass
      """
      out, cache = None, None
    
     
      N, C, H, W = x.shape
      x_new = x.transpose(0, 2, 3, 1).reshape(N*H*W, C)
      out_new, cache = batchnorm_forward(x_new, gamma, beta, bn_param)
      out = out_new.reshape(N, H, W, C).transpose(0, 3, 1, 2)
     
    
      return out, cache
    
    
    def spatial_batchnorm_backward(dout, cache):
      """
      Computes the backward pass for spatial batch normalization.
      
      Inputs:
      - dout: Upstream derivatives, of shape (N, C, H, W)
      - cache: Values from the forward pass
      
      Returns a tuple of:
      - dx: Gradient with respect to inputs, of shape (N, C, H, W)
      - dgamma: Gradient with respect to scale parameter, of shape (C,)
      - dbeta: Gradient with respect to shift parameter, of shape (C,)
      """
      dx, dgamma, dbeta = None, None, None
    
    
      N, C, H, W = dout.shape
      dout_new = dout.transpose(0, 2, 3, 1).reshape(N * H * W, C)
      dx_new, dgamma, dbeta = batchnorm_backward(dout_new, cache)
      dx = dx_new.reshape(N, H, W, C).transpose(0, 3, 1, 2)
      
    
      return dx, dgamma, dbeta
      
    
    def svm_loss(x, y):
      """
      Computes the loss and gradient using for multiclass SVM classification.
    
      Inputs:
      - x: Input data, of shape (N, C) where x[i, j] is the score for the jth class
        for the ith input.
      - y: Vector of labels, of shape (N,) where y[i] is the label for x[i] and
        0 <= y[i] < C
    
      Returns a tuple of:
      - loss: Scalar giving the loss
      - dx: Gradient of the loss with respect to x
      """
      N = x.shape[0]
      correct_class_scores = x[np.arange(N), y]
      margins = np.maximum(0, x - correct_class_scores[:, np.newaxis] + 1.0)
      margins[np.arange(N), y] = 0
      loss = np.sum(margins) / N
      num_pos = np.sum(margins > 0, axis=1)
      dx = np.zeros_like(x)
      dx[margins > 0] = 1
      dx[np.arange(N), y] -= num_pos
      dx /= N
      return loss, dx
    
    
    def softmax_loss(x, y):
      """
      Computes the loss and gradient for softmax classification.
    
      Inputs:
      - x: Input data, of shape (N, C) where x[i, j] is the score for the jth class
        for the ith input.
      - y: Vector of labels, of shape (N,) where y[i] is the label for x[i] and
        0 <= y[i] < C
    
      Returns a tuple of:
      - loss: Scalar giving the loss
      - dx: Gradient of the loss with respect to x
      """
      probs = np.exp(x - np.max(x, axis=1, keepdims=True))
      probs /= np.sum(probs, axis=1, keepdims=True)
      N = x.shape[0]
      loss = -np.sum(np.log(probs[np.arange(N), y])) / N
      dx = probs.copy()
      dx[np.arange(N), y] -= 1
      dx /= N
      return loss, dx
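

    The TwoLayerNet and FullyConnectedNet code further below calls affine_relu_forward and affine_relu_backward, which come from layer_utils.py and are not listed in this post. A minimal sketch of those two helpers, assuming the standard interface used by the calling code:

    from layers import affine_forward, affine_backward, relu_forward, relu_backward


    def affine_relu_forward(x, w, b):
      """Convenience layer: an affine transform followed by a ReLU."""
      a, fc_cache = affine_forward(x, w, b)
      out, relu_cache = relu_forward(a)
      cache = (fc_cache, relu_cache)
      return out, cache


    def affine_relu_backward(dout, cache):
      """Backward pass for the affine-relu convenience layer."""
      fc_cache, relu_cache = cache
      da = relu_backward(dout, relu_cache)
      dx, dw, db = affine_backward(da, fc_cache)
      return dx, dw, db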
    
    

    layer_utils.py combines pairs of layers into single forward/backward helpers (the two used in this post are sketched above); fast_layers.py is an optimized convolution implementation that avoids explicit Python loops; solver.py is the top-level training driver; optim.py contains the parameter update rules and is listed below.
    import numpy as np
    
    """
    This file implements various first-order update rules that are commonly used for
    training neural networks. Each update rule accepts current weights and the
    gradient of the loss with respect to those weights and produces the next set of
    weights. Each update rule has the same interface:
    
    def update(w, dw, config=None):
    
    Inputs:
      - w: A numpy array giving the current weights.
      - dw: A numpy array of the same shape as w giving the gradient of the
        loss with respect to w.
      - config: A dictionary containing hyperparameter values such as learning rate,
        momentum, etc. If the update rule requires caching values over many
        iterations, then config will also hold these cached values.
    
    Returns:
      - next_w: The next point after the update.
      - config: The config dictionary to be passed to the next iteration of the
        update rule.
    
    NOTE: For most update rules, the default learning rate will probably not perform
    well; however the default values of the other hyperparameters should work well
    for a variety of different problems.
    
    For efficiency, update rules may perform in-place updates, mutating w and
    setting next_w equal to w.
    """
    
    
    def sgd(w, dw, config=None):
      """
      Performs vanilla stochastic gradient descent.
    
      config format:
      - learning_rate: Scalar learning rate.
      """
      if config is None: config = {}
      config.setdefault('learning_rate', 1e-2)
    
      w -= config['learning_rate'] * dw
      return w, config
    
    
    def sgd_momentum(w, dw, config=None):
      """
      Performs stochastic gradient descent with momentum.
    
      config format:
      - learning_rate: Scalar learning rate.
      - momentum: Scalar between 0 and 1 giving the momentum value.
        Setting momentum = 0 reduces to sgd.
      - velocity: A numpy array of the same shape as w and dw used to store a moving
        average of the gradients.
      """
      if config is None: config = {}
      config.setdefault('learning_rate', 1e-2)
      config.setdefault('momentum', 0.9)
      v = config.get('velocity', np.zeros_like(w))
      
      next_w = None
      #############################################################################
      # TODO: Implement the momentum update formula. Store the updated value in   #
      # the next_w variable. You should also use and update the velocity v.       #
      #############################################################################
      v = config['momentum'] * v - config['learning_rate'] * dw
      next_w = w + v
      #############################################################################
      #                             END OF YOUR CODE                              #
      #############################################################################
      config['velocity'] = v
    
      return next_w, config
    
    
    
    def rmsprop(x, dx, config=None):
      """
      Uses the RMSProp update rule, which uses a moving average of squared gradient
      values to set adaptive per-parameter learning rates.
    
      config format:
      - learning_rate: Scalar learning rate.
      - decay_rate: Scalar between 0 and 1 giving the decay rate for the squared
        gradient cache.
      - epsilon: Small scalar used for smoothing to avoid dividing by zero.
      - cache: Moving average of second moments of gradients.
      """
      if config is None: config = {}
      config.setdefault('learning_rate', 1e-2)
      config.setdefault('decay_rate', 0.99)
      config.setdefault('epsilon', 1e-8)
      config.setdefault('cache', np.zeros_like(x))
    
      next_x = None
      #############################################################################
      # TODO: Implement the RMSprop update formula, storing the next value of x   #
      # in the next_x variable. Don't forget to update cache value stored in      #  
      # config['cache'].                                                          #
      #############################################################################
      config['cache'] = config['decay_rate'] * config['cache'] + (1 - config['decay_rate']) * (dx**2)
      next_x = x - config['learning_rate'] * dx / (np.sqrt(config['cache']) + config['epsilon'])
      #############################################################################
      #                             END OF YOUR CODE                              #
      #############################################################################
    
      return next_x, config
    
    
    def adam(x, dx, config=None):
      """
      Uses the Adam update rule, which incorporates moving averages of both the
      gradient and its square and a bias correction term.
    
      config format:
      - learning_rate: Scalar learning rate.
      - beta1: Decay rate for moving average of first moment of gradient.
      - beta2: Decay rate for moving average of second moment of gradient.
      - epsilon: Small scalar used for smoothing to avoid dividing by zero.
      - m: Moving average of gradient.
      - v: Moving average of squared gradient.
      - t: Iteration number.
      """
      if config is None: config = {}
      config.setdefault('learning_rate', 1e-3)
      config.setdefault('beta1', 0.9)
      config.setdefault('beta2', 0.999)
      config.setdefault('epsilon', 1e-8)
      config.setdefault('m', np.zeros_like(x))
      config.setdefault('v', np.zeros_like(x))
      config.setdefault('t', 0)
      
      next_x = None
      #############################################################################
      # TODO: Implement the Adam update formula, storing the next value of x in   #
      # the next_x variable. Don't forget to update the m, v, and t variables     #
      # stored in config.                                                         #
      #############################################################################
      m = config['m']
      beta1 = config['beta1']
      beta2 = config['beta2']
      eps = config['epsilon']
      v = config['v']
      learning_rate = config['learning_rate']
      config['t'] += 1
      m = beta1 * m + (1 - beta1) * dx
      v = beta2 * v + (1 - beta2) * (dx ** 2)
      m_bias = m / (1 - beta1 ** config['t'])
      v_bias = v / (1 - beta2 ** config['t'])
      next_x = x - learning_rate * m_bias / (np.sqrt(v_bias) + eps)
      config['m'] = m
      config['v'] = v
      #############################################################################
      #                             END OF YOUR CODE                              #
      #############################################################################
      
      return next_x, config
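
    Every rule above follows the (w, dw, config) interface described in the module docstring, with per-parameter state such as velocity, cache, m, v, and t kept inside config. A small usage sketch of that interface (my own example, not part of optim.py):

    w = np.ones(5)
    config = None
    for step in range(3):
      dw = 2 * w                      # stand-in gradient
      w, config = adam(w, dw, config)
    print(w)                          # weights after three Adam steps
    print(config['t'])                # 3: the iteration counter lives in config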
    

    fc_net.py: the top-level design of the fully connected network
    import numpy as np
    
    from layers import *
    from layer_utils import *
    
    
    class TwoLayerNet(object):
      """
      A two-layer fully-connected neural network with ReLU nonlinearity and
      softmax loss that uses a modular layer design. We assume an input dimension
      of D, a hidden dimension of H, and perform classification over C classes.
      
  The architecture should be affine - relu - affine - softmax.
    
      Note that this class does not implement gradient descent; instead, it
      will interact with a separate Solver object that is responsible for running
      optimization.
    
      The learnable parameters of the model are stored in the dictionary
      self.params that maps parameter names to numpy arrays.
      """
      
      def __init__(self, input_dim=3*32*32, hidden_dim=100, num_classes=10,
                   weight_scale=1e-3, reg=0.0):
        """
        Initialize a new network.
    
        Inputs:
        - input_dim: An integer giving the size of the input
        - hidden_dim: An integer giving the size of the hidden layer
        - num_classes: An integer giving the number of classes to classify
        - dropout: Scalar between 0 and 1 giving dropout strength.
        - weight_scale: Scalar giving the standard deviation for random
          initialization of the weights.
        - reg: Scalar giving L2 regularization strength.
        """
        self.params = {}
        self.reg = reg
        
        ############################################################################
        # TODO: Initialize the weights and biases of the two-layer net. Weights    #
        # should be initialized from a Gaussian with standard deviation equal to   #
        # weight_scale, and biases should be initialized to zero. All weights and  #
        # biases should be stored in the dictionary self.params, with first layer  #
        # weights and biases using the keys 'W1' and 'b1' and second layer weights #
        # and biases using the keys 'W2' and 'b2'.                                 #
        ############################################################################
        self.params['W1'] = weight_scale * np.random.randn(input_dim, hidden_dim)
        self.params['b1'] = np.zeros((1, hidden_dim))
        self.params['W2'] = weight_scale * np.random.randn(hidden_dim, num_classes)
        self.params['b2'] = np.zeros((1, num_classes))
        ############################################################################
        #                             END OF YOUR CODE                             #
        ############################################################################
    
    
      def loss(self, X, y=None):
        """
        Compute loss and gradient for a minibatch of data.
    
        Inputs:
        - X: Array of input data of shape (N, d_1, ..., d_k)
        - y: Array of labels, of shape (N,). y[i] gives the label for X[i].
    
        Returns:
        If y is None, then run a test-time forward pass of the model and return:
        - scores: Array of shape (N, C) giving classification scores, where
          scores[i, c] is the classification score for X[i] and class c.
    
        If y is not None, then run a training-time forward and backward pass and
        return a tuple of:
        - loss: Scalar value giving the loss
        - grads: Dictionary with the same keys as self.params, mapping parameter
          names to gradients of the loss with respect to those parameters.
        """  
        scores = None
        W1, b1 = self.params['W1'], self.params['b1']
        W2, b2 = self.params['W2'], self.params['b2']
        N = X.shape[0]
        h1, cache1 = affine_relu_forward(X, W1, b1)
        out, cache2 = affine_forward(h1, W2, b2)
        scores = out  # (N,C)
        # If y is None then we are in test mode so just return scores
        if y is None:
          return scores
    
        loss, grads = 0, {}
        data_loss, dscores = softmax_loss(scores, y)
        reg_loss = 0.5 * self.reg * np.sum(W1 * W1) + 0.5 * self.reg * np.sum(W2 * W2)
        loss = data_loss + reg_loss
    
        # Backward pass: compute gradients
    
        dh1, dW2, db2 = affine_backward(dscores, cache2)
        dX, dW1, db1 = affine_relu_backward(dh1, cache1)
      # Add the regularization gradient contribution
        dW2 += self.reg * W2
        dW1 += self.reg * W1
        grads['W1'] = dW1
        grads['b1'] = db1
        grads['W2'] = dW2
        grads['b2'] = db2
    
        return loss, grads
    
    
    class FullyConnectedNet(object):
      """
      A fully-connected neural network with an arbitrary number of hidden layers,
      ReLU nonlinearities, and a softmax loss function. This will also implement
      dropout and batch normalization as options. For a network with L layers,
      the architecture will be
      
      {affine - [batch norm] - relu - [dropout]} x (L - 1) - affine - softmax
      
      where batch normalization and dropout are optional, and the {...} block is
      repeated L - 1 times.
      
      Similar to the TwoLayerNet above, learnable parameters are stored in the
      self.params dictionary and will be learned using the Solver class.
      """
    
      def __init__(self, hidden_dims, input_dim=3*32*32, num_classes=10,
                   dropout=0, use_batchnorm=False, reg=0.0,
                   weight_scale=1e-2, dtype=np.float32, seed=None):
        """
        Initialize a new FullyConnectedNet.
        
        Inputs:
        - hidden_dims: A list of integers giving the size of each hidden layer.
        - input_dim: An integer giving the size of the input.
        - num_classes: An integer giving the number of classes to classify.
        - dropout: Scalar between 0 and 1 giving dropout strength. If dropout=0 then
          the network should not use dropout at all.
        - use_batchnorm: Whether or not the network should use batch normalization.
        - reg: Scalar giving L2 regularization strength.
        - weight_scale: Scalar giving the standard deviation for random
          initialization of the weights.
        - dtype: A numpy datatype object; all computations will be performed using
          this datatype. float32 is faster but less accurate, so you should use
          float64 for numeric gradient checking.
        - seed: If not None, then pass this random seed to the dropout layers. This
      will make the dropout layers deterministic so we can gradient check the
          model.
        """
        self.use_batchnorm = use_batchnorm
        self.use_dropout = dropout > 0
        self.reg = reg
        self.num_layers = 1 + len(hidden_dims)
        self.dtype = dtype
        self.params = {}
    
        ############################################################################
        # TODO: Initialize the parameters of the network, storing all values in    #
        # the self.params dictionary. Store weights and biases for the first layer #
        # in W1 and b1; for the second layer use W2 and b2, etc. Weights should be #
        # initialized from a normal distribution with standard deviation equal to  #
        # weight_scale and biases should be initialized to zero.                   #
        #                                                                          #
        # When using batch normalization, store scale and shift parameters for the #
        # first layer in gamma1 and beta1; for the second layer use gamma2 and     #
        # beta2, etc. Scale parameters should be initialized to one and shift      #
        # parameters should be initialized to zero.                                #
        ############################################################################
        layers_dims = [input_dim] + hidden_dims + [num_classes]
        for i in range(self.num_layers):
          self.params['W'+str(i+1)] = weight_scale * np.random.randn(layers_dims[i], layers_dims[i+1])
      self.params['b' + str(i + 1)] = np.zeros(layers_dims[i+1], dtype=dtype)
      if self.use_batchnorm and i < len(hidden_dims):
        self.params['gamma' + str(i + 1)] = np.ones(layers_dims[i+1], dtype=dtype)
        self.params['beta' + str(i + 1)] = np.zeros(layers_dims[i+1], dtype=dtype)
    ############################################################################
    #                             END OF YOUR CODE                             #
    ############################################################################
    
        # When using dropout we need to pass a dropout_param dictionary to each
        # dropout layer so that the layer knows the dropout probability and the mode
        # (train / test). You can pass the same dropout_param to each dropout layer.
        self.dropout_param = {}
        if self.use_dropout:
          self.dropout_param = {'mode': 'train', 'p': dropout}
          if seed is not None:
            self.dropout_param['seed'] = seed
        
        # With batch normalization we need to keep track of running means and
        # variances, so we need to pass a special bn_param object to each batch
        # normalization layer. You should pass self.bn_params[0] to the forward pass
        # of the first batch normalization layer, self.bn_params[1] to the forward
        # pass of the second batch normalization layer, etc.
        self.bn_params = []
        if self.use_batchnorm:
          self.bn_params = [{'mode': 'train'} for i in xrange(self.num_layers - 1)]
        
        # Cast all parameters to the correct datatype
        for k, v in self.params.iteritems():
          self.params[k] = v.astype(dtype)
    
    
      def loss(self, X, y=None):
        """
        Compute loss and gradient for the fully-connected net.
    
        Input / output: Same as TwoLayerNet above.
        """
        X = X.astype(self.dtype)
        mode = 'test' if y is None else 'train'
    
        # Set train/test mode for batchnorm params and dropout param since they
        # behave differently during training and testing.
        if self.dropout_param is not None:
          self.dropout_param['mode'] = mode   
        if self.use_batchnorm:
          for bn_param in self.bn_params:
        bn_param['mode'] = mode
    
        scores = None
        ############################################################################
        # TODO: Implement the forward pass for the fully-connected net, computing  #
        # the class scores for X and storing them in the scores variable.          #
        #                                                                          #
        # When using dropout, you'll need to pass self.dropout_param to each       #
        # dropout forward pass.                                                    #
        #                                                                          #
        # When using batch normalization, you'll need to pass self.bn_params[0] to #
        # the forward pass for the first batch normalization layer, pass           #
        # self.bn_params[1] to the forward pass for the second batch normalization #
        # layer, etc.                                                              #
        ############################################################################
        h, cache1, cache2, cache3, cache4, bn, out = {}, {}, {}, {}, {}, {}, {}
        out[0] = X
    for i in range(self.num_layers-1):
      W, b = self.params['W' + str(i + 1)], self.params['b'+str(i+1)]
      if self.use_batchnorm:
        gamma, beta = self.params['gamma' + str(i + 1)], self.params['beta' + str(i + 1)]
        h[i], cache1[i] = affine_forward(out[i], W, b)
        bn[i], cache2[i] = batchnorm_forward(h[i], gamma, beta, self.bn_params[i])
        out[i+1], cache3[i] = relu_forward(bn[i])
        if self.use_dropout:
          out[i+1], cache4[i] = dropout_forward(out[i+1], self.dropout_param)
      else:
        out[i+1], cache3[i] = affine_relu_forward(out[i], W, b)
        if self.use_dropout:
          out[i+1], cache4[i] = dropout_forward(out[i+1], self.dropout_param)
    W, b = self.params['W' + str(self.num_layers)], self.params['b' + str(self.num_layers)]
    scores, cache = affine_forward(out[self.num_layers-1], W, b)
        ############################################################################
        #                             END OF YOUR CODE                             #
        ############################################################################
    
        # If test mode return early
        if mode == 'test':
          return scores
    
        loss, grads = 0.0, {}
        ############################################################################
        # TODO: Implement the backward pass for the fully-connected net. Store the #
        # loss in the loss variable and gradients in the grads dictionary. Compute #
        # data loss using softmax, and make sure that grads[k] holds the gradients #
        # for self.params[k]. Don't forget to add L2 regularization!               #
        #                                                                          #
        # When using batch normalization, you don't need to regularize the scale   #
        # and shift parameters.                                                    #
        #                                                                          #
        # NOTE: To ensure that your implementation matches ours and you pass the   #
        # automated tests, make sure that your L2 regularization includes a factor #
        # of 0.5 to simplify the expression for the gradient.                      #
        ############################################################################
        data_loss, dscores = softmax_loss(scores, y)
        reg_loss = 0
        for i in range(self.num_layers):
          W = self.params['W' + str(i + 1)]
          reg_loss += 0.5 * self.reg * (np.sum(np.square(W)))
        loss = data_loss + reg_loss
    
        dout, dbn, dh, ddrop = {}, {}, {}, {}
        t = self.num_layers - 1
    dout[t], grads['W'+str(t+1)], grads['b'+str(t+1)] = affine_backward(dscores, cache)
    for i in range(t):
      if self.use_batchnorm:
        if self.use_dropout:
          dout[t-i] = dropout_backward(dout[t-i], cache4[t-1-i])
        dbn[t-1-i] = relu_backward(dout[t-i], cache3[t-1-i])
        dh[t-1-i], grads['gamma'+ str(t-i)], grads['beta'+ str(t-i)] = batchnorm_backward_alt(dbn[t-1-i], cache2[t-1-i])
        dout[t-1-i], grads['W'+str(t-i)], grads['b'+str(t-i)] = affine_backward(dh[t-1-i], cache1[t-1-i])
      else:
        if self.use_dropout:
          dout[t - i] = dropout_backward(dout[t - i], cache4[t - 1 - i])
        dout[t - 1 - i], grads['W' + str(t - i)], grads['b' + str(t - i)] = affine_relu_backward(dout[t - i], cache3[t - 1 - i])
    # Add the L2 regularization contribution to every weight gradient.
    for i in range(self.num_layers):
      grads['W' + str(i + 1)] += self.reg * self.params['W' + str(i + 1)]
        ############################################################################
        #                             END OF YOUR CODE                             #
        ############################################################################
    
        return loss, grads
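
    The notebooks mentioned below verify each layer by comparing the analytic gradient against a centered-difference numerical gradient (the assignment provides helpers such as eval_numerical_gradient for this). A self-contained sketch of that check for the affine layer, as an illustration only:

    import numpy as np
    from layers import affine_forward, affine_backward


    def numeric_grad(f, x, h=1e-5):
      # Centered finite differences over every entry of x.
      grad = np.zeros_like(x)
      it = np.nditer(x, flags=['multi_index'])
      while not it.finished:
        idx = it.multi_index
        old = x[idx]
        x[idx] = old + h
        fp = f(x)
        x[idx] = old - h
        fm = f(x)
        x[idx] = old
        grad[idx] = (fp - fm) / (2 * h)
        it.iternext()
      return grad


    x = np.random.randn(4, 6)
    w = np.random.randn(6, 3)
    b = np.random.randn(3)
    out, cache = affine_forward(x, w, b)
    dout = np.random.randn(*out.shape)
    dx, dw, db = affine_backward(dout, cache)
    dx_num = numeric_grad(lambda x_: np.sum(affine_forward(x_, w, b)[0] * dout), x)
    print(np.max(np.abs(dx - dx_num)))  # should be tiny, on the order of 1e-9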
    
    

    Several ipynb notebooks check that the fc, batchnorm, and dropout implementations are correct.
    References: https://blog.csdn.net/QFire/article/details/77971749 (complete overall structure); https://www.cnblogs.com/daihengchen/p/5770129.html (full code for the layers.py part).
