
Posted by Avneet Singh, Product Manager and Sisi Jin, UX Designer, Google PI, and Lance Carr, Collaborator
At I/O 2023, Google launched Project GameFace, an open-source, hands-free gaming ‘mouse’ that enables people to control a computer’s cursor using their head movement and facial gestures. People can raise their eyebrows to click and drag, or open their mouth to move the cursor, making gaming more accessible.
The project was inspired by the story of quadriplegic video game streamer Lance Carr, who lives with muscular dystrophy, a progressive disease that weakens muscles, and we collaborated with Lance to bring Project GameFace to life. The full story behind the product is available on the Google Keyword blog.
It’s been an immensely interesting experience to think about how a mouse cursor can be controlled in such a novel way. We conducted many experiments and found that head movement and facial expressions can be a unique way to program the mouse cursor. MediaPipe’s new Face Landmarks Detection API with blendshapes option made this possible, as it allows any developer to leverage 478 3-dimensional face landmarks and 52 blendshape scores (coefficients representing facial expression) to infer detailed facial surfaces in real time.
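For readers who want to experiment with the same API, here is a minimal sketch of running MediaPipe's Face Landmarker with blendshapes enabled on a single image. The model bundle filename and image path are placeholders; the model can be downloaded from the MediaPipe documentation.

```python
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

# Load the face landmarker model bundle (path is a placeholder).
base_options = python.BaseOptions(model_asset_path="face_landmarker.task")
options = vision.FaceLandmarkerOptions(
    base_options=base_options,
    output_face_blendshapes=True,  # request the 52 blendshape scores
    num_faces=1,
)
detector = vision.FaceLandmarker.create_from_options(options)

# Run detection on a single image (placeholder filename).
image = mp.Image.create_from_file("face.jpg")
result = detector.detect(image)

landmarks = result.face_landmarks[0]      # 478 3D landmarks (x, y, z)
blendshapes = result.face_blendshapes[0]  # 52 scored expression categories
for bs in blendshapes[:5]:
    print(bs.category_name, round(bs.score, 3))
```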
Product Build and Details
In this post, we share the technical details of how we built Project GameFace and the various open source technologies we leveraged to create this exciting product!
Using head movement to move the mouse cursor
Caption: Using head movement to move the mouse cursor and customizing cursor speed to adapt to different screen resolutions
Through this project, we explored the concept of using head movement to move the mouse cursor. We focused on the forehead and iris as our two landmark locations. Both forehead and iris landmarks are known for their stability. However, Lance noticed that the cursor didn’t work well while using the iris landmark. The reason was that the iris may move slightly when people blink, causing the cursor to move unintentionally. Therefore, we decided to use the forehead landmark as the default tracking option.
There are instances where people may experience challenges when moving their head in certain directions. For example, Lance can move his head more quickly to the right than to the left. To address this issue, we introduced a user-friendly solution: separate cursor speed adjustment for each direction. This feature allows people to customize the cursor’s movement according to their preferences, facilitating smoother and more comfortable navigation.
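The idea can be sketched in a few lines. The helper and parameter names below are our own illustration, not GameFace's actual code: a raw head-movement delta is multiplied by a separate user-set speed for each of the four directions.

```python
# Hypothetical helper: scale a raw head-movement delta by a separate
# user-configured speed for each direction of travel.
def scale_delta(dx: float, dy: float, speed: dict[str, float]) -> tuple[float, float]:
    sx = speed["right"] if dx >= 0 else speed["left"]
    sy = speed["down"] if dy >= 0 else speed["up"]
    return dx * sx, dy * sy

# Example: compensate for a user who moves right more easily than left.
speed = {"left": 1.6, "right": 1.0, "up": 1.2, "down": 1.2}
print(scale_delta(-4.0, 2.0, speed))  # leftward motion gets the 1.6x boost
```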
We wanted the experience to be as smooth as using a handheld controller. Jitter of the mouse cursor was one of the major issues we wanted to overcome. The appearance of cursor jitter is influenced by various factors, including the user setup, camera, noise, and lighting conditions. We implemented an adjustable cursor smoothing feature so users can easily fine-tune it to best suit their specific setup.
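One common way to implement such smoothing is an exponential moving average, where a single user-adjustable coefficient trades responsiveness against jitter. GameFace's actual filter may differ; treat this as an illustrative sketch.

```python
class CursorSmoother:
    """Exponential moving average over cursor positions.

    `smoothing` in [0, 1): 0 passes positions through unchanged,
    values near 1 trade responsiveness for stability.
    """

    def __init__(self, smoothing: float = 0.8):
        self.smoothing = smoothing
        self._pos = None

    def update(self, x: float, y: float) -> tuple[float, float]:
        if self._pos is None:
            self._pos = (x, y)  # initialize on the first sample
        a = self.smoothing
        self._pos = (a * self._pos[0] + (1 - a) * x,
                     a * self._pos[1] + (1 - a) * y)
        return self._pos
```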
Using facial expressions to perform mouse actions and keyboard presses
Very early on, one of our primary insights was that people have varying comfort levels making facial expressions. A gesture that comes easily to one user may be extremely difficult for another to make deliberately. For instance, Lance can move his eyebrows independently with ease, while the rest of the team struggled to match his skill. Hence, we decided to build functionality for people to customize which expressions they use to control the mouse.
Caption: Using facial expressions to control the mouse
Think of it as a custom binding of a gesture to a mouse action. When deliberating which mouse actions the product should cover, we tried to capture common scenarios, from left and right click to scrolling up and down. However, using the head to control cursor movement is a different experience from the conventional manner. We also wanted to give users the option to reset the mouse cursor to the center of the screen using a facial gesture.
Caption: Using facial expressions to control the keyboard
The most recent release of MediaPipe Face Landmarks Detection brings an exciting addition: blendshapes output. With this enhancement, the API generates 52 face blendshape values representing the expressiveness of 52 facial gestures, such as raising the left eyebrow or opening the mouth. These values can be effectively mapped to control a wide range of functions, offering users expanded possibilities for customization and manipulation.
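Continuing the earlier sketch, the blendshape output can be turned into a simple name-to-score mapping that the rest of the application reads each frame. Category names such as `jawOpen` and `browOuterUpLeft` are the actual names MediaPipe emits.

```python
# Build a name -> score dict from the blendshape list each frame.
scores = {bs.category_name: bs.score for bs in result.face_blendshapes[0]}

# Individual expressions can then be read by name, for example:
mouth_open = scores["jawOpen"]          # 0.0 (closed) .. 1.0 (wide open)
left_brow = scores["browOuterUpLeft"]   # outer left eyebrow raise
```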
We were able to extend the same functionality and add the option for keyboard binding too. This lets users employ their facial gestures to also press keyboard keys in a similar binding fashion.
Set gesture size to determine when to trigger a mouse/keyboard action
Caption: Setting the gesture size to trigger an action
While testing the software, we found that facial expressions were more or less pronounced for each of us, so we incorporated the idea of a gesture size, which lets people control the extent to which they need to gesture to trigger a mouse action. Blendshape coefficients were helpful here: different users can now set different thresholds on each specific expression, which helps them customize the experience to their comfort.
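A sketch of how a per-expression gesture size could gate actions follows; the binding table and the hysteresis factor are our own illustration. An action fires only on the frame where the score first crosses the user's threshold, and re-arms once the score drops well below it.

```python
# Hypothetical per-user configuration: which expression triggers which
# action, and how pronounced the gesture must be (the "gesture size").
bindings = {"jawOpen": "left_click", "browOuterUpLeft": "right_click"}
gesture_size = {"jawOpen": 0.50, "browOuterUpLeft": 0.35}

_active: set[str] = set()

def poll_gestures(scores: dict[str, float]) -> list[str]:
    """Return the actions to fire this frame, with simple hysteresis."""
    fired = []
    for name, action in bindings.items():
        threshold = gesture_size[name]
        if scores[name] >= threshold and name not in _active:
            _active.add(name)          # rising edge: fire once
            fired.append(action)
        elif scores[name] < threshold * 0.8:
            _active.discard(name)      # re-arm after the gesture relaxes
    return fired
```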
Keeping the camera feed available
Another important insight we received from Lance was that gamers often have multiple cameras. For our machine learning models to operate optimally, it’s best to have a camera pointing straight at the user’s face with decent lighting. So we incorporated the ability for the user to select the correct camera to help frame them and deliver the most optimal performance.
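A simple way to offer this choice in Python is to probe camera indices with OpenCV and let the user pick from the ones that open. This is a sketch; GameFace's own enumeration logic may differ.

```python
import cv2

def available_cameras(max_index: int = 8) -> list[int]:
    """Probe the first few device indices and return those that open."""
    found = []
    for index in range(max_index):
        cap = cv2.VideoCapture(index)
        if cap.isOpened():
            found.append(index)
        cap.release()
    return found

print(available_cameras())  # e.g. [0, 1] on a machine with two webcams
```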
Our product’s interface incorporates a live camera feed, giving users real-time visibility of their head movements and gestures. This feature brings several benefits. Firstly, users can set thresholds more effectively by directly observing their own movements; the visual representation enables informed decisions on appropriate threshold values. Moreover, the live camera feed improves users’ understanding of different gestures as they visually correlate their movements with the corresponding actions in the application. Overall, the camera feed significantly enhances the user experience, facilitating accurate threshold settings and a deeper understanding of gestures.
Product packaging
Our next step was to build the capability to control the mouse and keyboard using our custom-defined logic. To enable mouse and keyboard control within our Python application, we use two libraries: PyAutoGUI for mouse control and PyDirectInput for keyboard control. PyAutoGUI was chosen for its robust mouse control capabilities, allowing us to simulate mouse movements, clicks, and other actions. Meanwhile, we leverage PyDirectInput for keyboard control, as it offers enhanced compatibility with various applications, including games and others relying on DirectX.
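The calls below illustrate the division of labor between the two libraries; this is a minimal sketch, not GameFace's actual control loop.

```python
import pyautogui
import pydirectinput

pyautogui.FAILSAFE = False   # the cursor is driven continuously by head motion

# PyAutoGUI handles cursor movement and clicks.
pyautogui.moveRel(25, 0)     # nudge the cursor 25 px to the right
pyautogui.click()            # left click at the current position

# PyDirectInput handles key presses, including in DirectX games.
pydirectinput.press("space")
```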
For application packaging, we used PyInstaller to turn our Python-based application into an executable, making it easier for users to run our software without needing to install Python or additional dependencies. PyInstaller provides a reliable and efficient means to distribute our application, ensuring a smooth user experience.
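In its simplest form, a PyInstaller build is a one-line step; the flags shown are common choices for a single-file windowed app, not necessarily the exact ones we used, and `app.py` is a placeholder entry point.

```
pyinstaller --onefile --windowed app.py
```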
The product introduces a novel form factor to engage users in an important function like handling the mouse cursor. Making the product and its UI intuitive and easy to follow was a top priority for our design and engineering team. We worked closely with Lance to incorporate his feedback into our UX considerations, and we found CustomTkinter was able to handle most of our UI considerations in Python.
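To give a flavor of the library, here is a minimal CustomTkinter window with a slider, the kind of control that suits settings such as cursor speed or smoothing. All names here are our own illustration.

```python
import customtkinter as ctk

app = ctk.CTk()
app.title("GameFace settings (sketch)")

# A slider is a natural fit for a setting like cursor speed.
label = ctk.CTkLabel(app, text="Cursor speed")
label.pack(padx=20, pady=(20, 5))
slider = ctk.CTkSlider(app, from_=0.1, to=3.0)
slider.pack(padx=20, pady=(0, 20))

app.mainloop()
```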
We’re excited to see the potential of Project GameFace and can’t wait for developers and enterprises to leverage it to build new experiences. The code for GameFace is open sourced on GitHub.
Acknowledgements
We would like to acknowledge the invaluable contributions of the following people to this project: Lance Carr, David Hewlett, Laurence Moroney, Khanh LeViet, Glenn Cameron, Edwina Priest, Joe Fry, Feihong Chen, Boon Panichprecha, Dome Seelapun, Kim Nomrak, Pear Jaionnom, Lloyd Hightower