Eulerian video magnification: a review

Haider Ismael Shahadi, Hayder J. Albattat, Zaid J. Al-allaq, Ahmed T. Thahab


Many subtle but important changes in the environment are invisible to the naked human eye. These changes arise from colour variations, such as the change in facial colour caused by blood flow, or from small motions, such as the movement of veins under the skin and the vibration of buildings. Traditionally, optical instruments are required to observe such variations. Alternatively, new technologies, such as high-speed imaging and computer processing, can be used to detect them. These computerised "microscopes" rely on computation rather than optical amplification to magnify subtle colour and motion changes in videos. The most popular technique for realising such a computation-based microscope is Eulerian video magnification (EVM). However, several challenges in EVM remain to be solved to meet real-time and video-quality requirements. This paper presents a comprehensive study of EVM methods and reviews the related literature. The strengths and drawbacks of existing works are discussed, and the important research directions and open challenges in EVM are summarised.
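The computation-based magnification described above follows a common EVM pipeline: spatially filter each frame, temporally band-pass the pixel time series to isolate the subtle signal, amplify it, and add it back to the input. The following is a minimal sketch of that pipeline, not the authors' implementation; it substitutes a single Gaussian blur for the full pyramid decomposition used in the literature, and the function name, parameters, and default values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import butter, filtfilt


def eulerian_magnify(frames, fps, f_lo, f_hi, alpha, sigma=4.0):
    """Minimal EVM sketch (hypothetical helper, not the paper's code).

    frames : (T, H, W) array of grayscale intensities in [0, 1]
    fps    : frame rate in frames per second
    f_lo, f_hi : temporal passband in Hz (e.g. 0.8-3 Hz for pulse)
    alpha  : amplification factor
    sigma  : spatial Gaussian blur, a coarse stand-in for one
             level of a Gaussian/Laplacian pyramid
    """
    # Spatial filtering: blur each frame to suppress noise and
    # pool the subtle signal over a neighbourhood of pixels.
    blurred = np.stack([gaussian_filter(f, sigma) for f in frames])

    # Temporal filtering: a zero-phase Butterworth band-pass along
    # the time axis isolates the frequency band of interest.
    b, a = butter(2, [f_lo, f_hi], btype="band", fs=fps)
    band = filtfilt(b, a, blurred, axis=0)

    # Amplify the band-passed signal and add it back to the input.
    return np.clip(frames + alpha * band, 0.0, 1.0)


# Usage: a synthetic 3 s clip with a faint 1.5 Hz intensity
# oscillation (amplitude 0.01) becomes visibly stronger.
T, fps = 90, 30
t = np.arange(T) / fps
frames = 0.5 + 0.01 * np.sin(2 * np.pi * 1.5 * t)[:, None, None] * np.ones((T, 8, 8))
out = eulerian_magnify(frames, fps, f_lo=0.8, f_hi=3.0, alpha=10.0)
```

The temporal passband is the key design choice: choosing it to bracket the expected signal (a heart-rate band for pulse, a structural resonance for vibration) determines what the method amplifies and what it leaves untouched.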


Eulerian video magnification; Video processing; Temporal filters; Spatial filters; Pyramid decomposition




This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
