The problem addressed by dictionary learning (DL) is the representation of data as a sparse linear combination of columns of a matrix called the dictionary. Both the dictionary and the sparse representations are learned from the data. We show how DL can be employed in the imputation of multivariate time series. We use a structured dictionary comprising one block for each time series and a common block shared by all the time series. The size of each block and the sparsity level of the representation are selected using information theoretic criteria. The objective function used in learning is designed to minimize either the sum of the squared errors or the sum of the magnitudes of the errors. We propose dimensionality reduction techniques for the case of high-dimensional time series. To demonstrate how the new algorithms can be used in practical applications, we conduct a large set of experiments on five real-life data sets. The missing data (MD) are simulated according to various scenarios in which both the percentage of MD and the length of the MD sequences are varied. This allows us to identify the situations in which the novel DL-based methods are superior to the existing methods.
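To make the sparse-representation idea concrete, the following is a minimal NumPy sketch of greedy sparse coding via orthogonal matching pursuit: given a dictionary D, a signal y is approximated by a combination of at most s atoms (columns of D). This is an illustrative toy, not the structured-dictionary algorithm of the paper; the dictionary, signal, and sparsity level s are made-up test data.

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(D, y, s):
    """Greedy orthogonal matching pursuit: approximate y as a
    combination of at most s columns (atoms) of dictionary D."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(s):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit of y on the selected atoms
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - D @ x
    return x

# toy dictionary with unit-norm atoms (m-dimensional signals, n atoms)
m, n, s = 20, 50, 3
D = rng.standard_normal((m, n))
D /= np.linalg.norm(D, axis=0)

# synthesize a signal that is exactly s-sparse in D
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = D @ x_true

x_hat = omp(D, y, s)
print("residual norm:", np.linalg.norm(y - D @ x_hat))
```

In a full DL pipeline the dictionary itself would also be updated from the data, and in the imputation setting only the observed entries of each signal would enter the fit.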