Modern data analysis frequently involves large volumes of time-series data, which are characterized by high dimensionality, enormous volume, and the presence of noise and redundant features. The resulting “curse of dimensionality” hinders learning approaches, which may fail to capture the temporal dependencies present in such data. Addressing this problem requires reducing dimensionality while preserving the intrinsic temporal dependencies, so that learning and predictive performance do not degrade. This study presents twelve dimensionality reduction algorithms specifically suited to time-series data and characterizes them along several criteria, including supervision, linearity, time and memory complexity, hyper-parameters, and drawbacks.
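
As an illustrative sketch only (not a method from this study), the snippet below shows the general idea of reducing the dimensionality of time-series data: a noisy 1-D signal is sliced into overlapping windows so each sample retains local temporal context, and the windows are then projected onto a few principal components. PCA is used here merely as a familiar linear, unsupervised baseline; the synthetic signal, window length, and number of components are hypothetical choices for demonstration.

```python
# Illustrative sketch: windowed time series reduced with PCA.
# All data and hyper-parameters below are assumed for demonstration only.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic noisy time series: a slow sinusoid plus Gaussian noise.
t = np.linspace(0, 10 * np.pi, 2000)
series = np.sin(t) + 0.3 * rng.standard_normal(t.shape)

# Turn the 1-D series into overlapping windows so that each sample
# carries local temporal context (window length is an assumed choice).
window = 50
X = sliding_window_view(series, window)   # shape: (1951, 50)

# Project the 50-dimensional windows onto a handful of components,
# retaining most of the variance while discarding noisy directions.
pca = PCA(n_components=5)
X_reduced = pca.fit_transform(X)

print(X.shape, "->", X_reduced.shape)
print("explained variance ratio:", pca.explained_variance_ratio_.round(3))
```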