Consider two random variables $X$ and $Y$. To measure how much they change together, we use the covariance.

First, we center each variable at its mean, $X'=X-\bar{X}$ and $Y'=Y-\bar{Y}$, and take the expectation of their product. The product is positive when both variables lie on the same side of their means (both $X'$ and $Y'$ positive, or both negative), and negative when they lie on opposite sides. The resulting measure, called the covariance, is defined as

$$Cov(X,Y)=E\left[(X-\bar{X})(Y-\bar{Y})\right]=E[X'Y'],$$

where $\bar{X}=E[X]$ and $\bar{Y}=E[Y]$ denote the means.
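As a quick sanity check, here is a minimal NumPy sketch (the data and variable names are illustrative, not from the text) that computes the covariance directly as the mean of the centered product and compares it against `np.cov`:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: y increases with x plus noise, so we expect a positive covariance.
x = rng.normal(size=1000)
y = 2.0 * x + rng.normal(size=1000)

# Covariance as the expectation (here, sample mean) of the centered product.
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))

# Same quantity via NumPy; bias=True divides by n, matching the plain mean above.
print(cov_xy, np.cov(x, y, bias=True)[0, 1])
```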

For random vectors $X$ and $Y$, the covariance is a matrix whose $ij$-th element is the covariance of the $i$-th element of $X$ and the $j$-th element of $Y$, i.e. $Cov(X_i,Y_j)$. Precisely,

$$Cov(X,Y)=E\left[(X-\bar{X})(Y-\bar{Y})^{\top}\right], \qquad \text{so that } Cov(X,Y)_{ij}=Cov(X_i,Y_j).$$
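For the vector case, a small sketch (dimensions and data chosen arbitrarily for illustration) that builds the cross-covariance matrix by centering each component and averaging the outer products:

```python
import numpy as np

rng = np.random.default_rng(1)

# n observations of a 3-dimensional X and a 2-dimensional Y (illustrative sizes).
n = 1000
X = rng.normal(size=(n, 3))
Y = X[:, :2] + rng.normal(size=(n, 2))

# Center each column, then average the outer products:
# the (i, j) entry estimates Cov(X_i, Y_j).
Xc = X - X.mean(axis=0)
Yc = Y - Y.mean(axis=0)
cross_cov = Xc.T @ Yc / n   # shape (3, 2)

print(cross_cov)
```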

Covariance has the property that $Cov(X,X)=E\left[(X-\bar{X})^2\right]=Var(X)$: the covariance of a random variable with itself is its variance.

The magnitude of the covariance is hard to interpret on its own, because it depends on the scales of $X$ and $Y$. We therefore normalize it into $[-1,1]$ by dividing by the standard deviations, which gives Pearson's correlation coefficient:

$$\rho_{X,Y}=\frac{Cov(X,Y)}{\sigma_X\,\sigma_Y},$$

where $\sigma_X$ and $\sigma_Y$ are the standard deviations of $X$ and $Y$.

Pearson's coefficient measures the strength of the linear dependence between the random variables $X$ and $Y$.
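As a sketch (synthetic data, illustrative only), the coefficient can be computed by hand and checked against NumPy's `np.corrcoef`; it also shows how a pair that is related nonlinearly can still score near zero, which is what "linear dependence" means here:

```python
import numpy as np

rng = np.random.default_rng(2)

x = rng.normal(size=1000)
y = 0.5 * x + rng.normal(size=1000)   # linearly related, with noise
z = x ** 2                            # strongly dependent on x, but not linearly

def pearson(a, b):
    # Covariance normalized by the product of the standard deviations.
    a_c, b_c = a - a.mean(), b - b.mean()
    return np.mean(a_c * b_c) / (a.std() * b.std())

print(pearson(x, y), np.corrcoef(x, y)[0, 1])   # roughly 0.45; agrees with NumPy
print(pearson(x, z))                            # near 0 despite the dependence
```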