Deep Learning Essentials

Data representation

In this section, we will look at the core data structures and representations used most commonly across linear algebra tasks. This is by no means a comprehensive list; it only highlights some of the prominent representations useful for understanding deep learning concepts:

  • Vectors: One of the most fundamental representations in linear algebra is the vector. A vector can be defined as an array of objects, or more specifically an array of numbers, that preserves the ordering of its elements. Each number in a vector can be accessed by its indexed location. For example, consider a vector x containing the seven days of the week encoded as 1 to 7, where 1 represents Sunday and 7 represents Saturday. Using this notation, a particular day of the week, say Wednesday, can be accessed directly from the vector as x[4].
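A minimal sketch of this idea in NumPy (the vector x here is our own illustrative example; note that NumPy indexing is zero-based, so the book's 1-based x[4] becomes x[4 - 1] in code):

```python
import numpy as np

# Days of the week encoded 1..7 (1 = Sunday, ..., 7 = Saturday).
x = np.array([1, 2, 3, 4, 5, 6, 7])

# Wednesday is the 4th day; with zero-based indexing that is index 3.
wednesday = x[4 - 1]
print(wednesday)  # → 4
```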
  • Matrices: These are a two-dimensional representation of numbers, or essentially a vector of vectors. Each matrix, m, is composed of a certain number of rows, r, and a certain number of columns, c. Each of the i rows, where 1 <= i <= r, is a vector of c numbers, and each of the j columns, where 1 <= j <= c, is a vector of r numbers. Matrices are a particularly useful representation when we are working with images. Though real-world images are three-dimensional in nature, most computer vision problems focus on their two-dimensional representation, so a matrix is an intuitive way to represent an image.
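The row/column structure described above can be sketched as follows (the small matrix here is an arbitrary example, not data from the text):

```python
import numpy as np

# A 2 x 3 matrix: r = 2 rows and c = 3 columns.
m = np.array([[1, 2, 3],
              [4, 5, 6]])

r, c = m.shape       # (2, 3)
row_1 = m[0, :]      # the first row: a vector of c numbers -> [1, 2, 3]
col_2 = m[:, 1]      # the second column: a vector of r numbers -> [2, 5]
print(r, c, row_1, col_2)
```

A grayscale image is stored the same way: a matrix whose entry at row i, column j is the intensity of that pixel.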
  • Identity matrices: An identity matrix is defined as a matrix that, when multiplied with a vector, leaves the vector unchanged. It has 1s along its main diagonal and 0s everywhere else.
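This defining property is easy to verify numerically (the vector v is an arbitrary example of our own):

```python
import numpy as np

I = np.eye(3)                     # 3 x 3 identity matrix
v = np.array([2.0, -1.0, 5.0])

# Multiplying by the identity does not change the vector.
result = I @ v
print(np.array_equal(result, v))  # → True
```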