Hi, I’ve been doing some extensive research for college on eye tracking, and I’m quite interested in GazePointer. I wanted to ask: how does GazePointer go from a gaze vector/gaze coordinates to screen coordinates using just a monocular webcam? Does it use polynomial regression? Is there any form of whitepaper on GazePointer, or on any other gaze-tracking product, that discusses this? Thank you
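To clarify what I mean by polynomial regression: something like the sketch below, where a short calibration session pairs known on-screen targets with the raw gaze estimates recorded while the user fixates each one, and a second-order polynomial is fit per screen axis. Everything here (the 9-point grid, the synthetic data, the function names) is my own assumption for illustration, not GazePointer's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

def poly_features(gaze):
    """Second-order polynomial terms of normalized gaze estimates (gx, gy)."""
    gx, gy = gaze[:, 0], gaze[:, 1]
    return np.column_stack([np.ones_like(gx), gx, gy, gx * gy, gx**2, gy**2])

# Hypothetical 9-point calibration grid: known on-screen targets (pixels)
# paired with gaze estimates recorded while the user fixated each target.
screen_targets = np.array(
    [[x, y] for y in (100, 540, 980) for x in (160, 960, 1760)], dtype=float
)
# Synthetic stand-in for real webcam gaze estimates (normalized, slightly noisy).
gaze_samples = screen_targets / [1920, 1080] + rng.normal(0, 0.01, (9, 2))

# Least-squares fit: one coefficient vector per screen axis (x and y).
A = poly_features(gaze_samples)
coeffs, *_ = np.linalg.lstsq(A, screen_targets, rcond=None)

def gaze_to_screen(gaze_xy):
    """Map a raw gaze estimate to predicted screen pixels."""
    features = poly_features(np.atleast_2d(np.asarray(gaze_xy, dtype=float)))
    return (features @ coeffs)[0]

print(gaze_to_screen([0.5, 0.5]))  # should land roughly at screen centre
```

Is this roughly the kind of calibration mapping GazePointer performs, or does it use something else (e.g. a geometric eye model)?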