Many modern applications, such as machine learning and statistics, require solving large-dimensional optimization problems. First-order methods, such as the gradient method and the proximal point method, are widely used to solve such large-scale problems, since their per-iteration computational cost depends only mildly on the problem dimension. However, they suffer from slow convergence rates compared to second-order methods such as Newton's method. Therefore, accelerating first-order methods has received great interest, leading to the development of the conjugate gradient method, the heavy-ball method, and Nesterov's fast gradient method, which we will review in this talk. This talk will then present recently proposed accelerated first-order methods, named the optimized gradient method (OGM) and OGM-G.
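As a concrete illustration of the acceleration idea reviewed in the talk, the sketch below implements Nesterov's fast gradient method on a small quadratic. The problem instance (the matrix `A`, vector `b`, and iteration count) is a hypothetical example chosen here for illustration, not taken from the talk.

```python
import numpy as np

# Minimize f(x) = 0.5 x^T A x - b^T x, a convex quadratic (example data).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
L = np.linalg.eigvalsh(A).max()  # Lipschitz constant of the gradient

def grad(x):
    return A @ x - b

x = np.zeros(2)   # main iterate
y = np.zeros(2)   # extrapolated (momentum) point
t = 1.0           # Nesterov momentum parameter
for _ in range(200):
    x_next = y - grad(y) / L                        # gradient step at y
    t_next = (1 + np.sqrt(1 + 4 * t**2)) / 2        # update momentum parameter
    y = x_next + ((t - 1) / t_next) * (x_next - x)  # extrapolation step
    x, t = x_next, t_next

x_star = np.linalg.solve(A, b)  # exact minimizer, for comparison
print(np.linalg.norm(x - x_star) < 1e-3)
```

Compared with plain gradient descent, the only extra work per iteration is the extrapolation step, yet the worst-case rate on the function values improves from O(1/k) to O(1/k^2); OGM sharpens the constant in this bound further.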