Group Projects

Groups

You will be working in groups for your project assignment.


Project ideas

The first thing you will do as a group is select a research project to pursue. We have compiled several ideas below and encourage you to pick whatever interests you. If you prefer to design your own project, please do so. Keep in mind that if your group goes with its own idea, you will be expected to pitch the project to the instructors before pursuing it. You will need to collect initial feedback, respond to that feedback, and work on a project that has been greenlit. We want to be sure that the level of difficulty and the workload you are attempting are in line with the other projects.

Project topics can come from the following areas, among others: 2D/3D animation, facial animation, lighting, lip sync, motion, vision-based techniques in animation, cartoon filters, rendering, and human-computer-interaction-based systems.

Here is a list of a few project ideas:

  1. Expression Database. The FERG research group has created a database of stylized face tests, with hundreds of entries. The development of the DB has never been completed, and it currently lacks an export utility as well as an advanced search function. An improved front end would greatly enhance the value of this resource, which is still being updated. We know of no other database that has comparable tests of stylized faces.

  2. Thresholding on Stylized/Real facial features. The scowling eyebrow is surprisingly subtle on both stylized and realistic faces. What is the horizontal threshold relative to the eye that is the tipping point for frown/no frown? Does the angle of the eyebrow matter? How is this threshold different on stylized faces? (A landmark-geometry sketch for this idea appears after this list.)

  3. Do wrinkles matter? Stylized characters are not designed to wrinkle when an expression is in progress, although their human counterparts do. Would adding wrinkles, like the frown lines or the horizontal folds of the worried brow, improve readability? Is the deepening of the nasolabial fold in smiling a special case, more critical than the other wrinkles?

  4. Beyond Cardinal expressions. There is a scientific consensus on the six cardinal expressions. It has been suggested that certain other states of mind may also have a universally recognized pattern, but so far no one has proposed a convincing solution. Can you design a stylized face to effectively portray 1) Confusion, 2) Boredom, or 3) Embarrassment?

  5. Learn from videos. Use computer vision and ML techniques to learn attributes (pose, gaze, emotion) from human video data and transfer them to cartoons. (A video face-crop sketch appears after this list.)

  6. Perception variance in different environments. Is the same expression perceived differently in VR versus conventional CG? How would you test that?

  7. Expressing emotion through eyes. How can different emotions be expressed through eye movement alone, in humans and in stylized characters? Use computer vision and ML techniques to train models that predict the emotions, and run a user study to validate the results.

  8. What makes a network recognize an emotion? The rise of AI has been accompanied by a much less well-known tool known as XAI, Explainable Artificial Intelligence. The FERG research group has created an AI tool to recognize the cardinal expressions from photographic input, but we have not extracted any information about what criteria our neural network uses for its decisions. There have been some recently published strategies for querying a neural network to unearth that information. Can you use our existing data to discover, for example, which areas of the face the neural network uses to recognize the various cardinal expressions? (A Grad-CAM sketch, one such strategy, appears after this list.)

  9. Stylized Character Typology Project. There is a wide range of stylizations that have been successfully employed in recent animated features. How would you create a library of stylized characters using screen shots from animated films? One way would be to categorize stylized faces from less to more stylized, based on an as-yet-to-be-determined set of criteria. The project could include creating a library of cardinals taken from screen shots for each character we decide to categorize.

  10. Subtle Expression for Stylized Characters.  So far we’ve had success bringing our various characters to ≥85% recognition for the cardinal expressions, but only in their most intense form.  If there is an optimal pose for smiling, sadness, anger, surprise, disgust, and fear that will consistently test at that level when the pose is the most extreme, can we obtain the same strong results with an “optimal” more subtle version of the same expressions?

  11. Universally recognized facial expressions. Can we define “rules” for universally recognized cat/dog/pig designs with extremely minimal shape/line information? What are the shapes that trigger recognition of the cardinals when the face is subject to extreme stylization, and how would you design a research study on this topic?

  12. Non-anatomical patterns for Facial Expression design. Virtually all expression research is focused on strictly anatomy-based descriptive systems (FACS) and poses.  It’s clear that non-anatomical patterns can also be effective for communicating facial expressions, but it’s much less clear how to describe and test those patterns. How could you use Mechanical Turk to answer these questions? One aspect of the research could be to design a user study that would help you discover the limits of these shapes.
    Please consider how artists would use the results of your research.

  13. Perceptual differences between stereo and monocular vision. Most perceptual studies on human expression recognition are performed on monocular static images or video.  In real life, people perceive expressions with both eyes using stereoscopic vision. How would you investigate perceptual differences between stereo and monocular vision by asking the question "Is the perception of stylized character expressions changed by stereoscopic vision?"

  14. Employing Facial Expression recognition to improve FE training. Can real-time expression recognition toolkits help train people to make more readable expressions with their own faces?  In other words, can expression recognition software make you a better actor or communicator?  How would expression recognition improve facial expression training, and can you identify how it would improve outcomes compared to simple visual feedback?

  15. Automatic Expression transfer between characters.  There is a vast body of existing literature on automatic expression transfer for human and stylized characters. After reviewing this literature, can you propose and implement a proof-of-concept system that transfers expressions between characters? Please define and evaluate the effectiveness of your implementation as compared to artist-created transfers. (A baseline blendshape-transfer sketch appears after this list.)

  16. Storyboarding automated Expressions from pre-existing dialog. How would you create an automated expression storyboard from pre-existing dialog?  Review the literature, then propose and implement a proof-of-concept system that automates expression storyboarding from written dialog, spoken dialog, or both.  Use a natural language processing toolkit such as the Python NLTK, or manually mark up the dialog text.  Evaluate the effectiveness of your system as compared to an artist-created storyboard or story reel. (An NLTK markup sketch appears after this list.)

  17. Automated interactive FE blocking system for multiple characters in Pre-Production. How could you design a system that pairs facial expressions with specific acting and poses related to the interaction of characters as part of the “blocking in” phase of a 3D story animatic? How could you implement the system so that the personality of each character could be most effectively “directed” by the animator without the use of either written or spoken dialog?

  18. What is the minimum character design that would work for 90% cardinals? So far we’ve tested characters whose level of stylization is relatively realistic, with a mouth/lip/eyebrow/eyelid/eye configuration closely based on the human face. How much can we remove, simplify, or stylize and still get 90% results? Can we lose the lips? The eyebrows? The nose?

  19. What is the effect of the hyper-alert eye? This is a character design, like Mickey’s, with lots of eye white above the iris even when no emotion is present. How do we determine when such a character is neutral/surprised/afraid/sad?

  20. How do we judge fake smiles? We suspect that there is wide agreement on which smiles are fake vs. real. How do you design a test to ask that question in a way that does not bias the result (do you tell people in advance they are judging smiles?), and how do you determine what elements make a smile look sincere vs. insincere?

  21. Are stylized faces perceived faster? Does the eye-tracking pattern differ? Since most stylized faces are simplified compared to real faces, and the most crucial elements are often made larger and clearer (iris, eye white, mouth), it’s possible that we perceive expressions on them more quickly than real faces. It’s also possible that we eye track a stylized face differently, since the information is more concentrated and there is less distraction.
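
Starter sketches

All of the code below is in Python. Each sketch is a minimal starting point under stated assumptions, not a finished implementation.

For idea 2, a small geometry helper, assuming you have 2D landmark coordinates for one eyebrow and one eye (the landmark source, whether a detector or hand annotation, is up to the group). It computes the two quantities the study would sweep: the brow's height above the eye, normalized by eye width, and the brow's angle.

    import math

    def brow_metrics(brow_inner, brow_outer, eye_left, eye_right):
        # Each argument is an (x, y) point in image coordinates (y grows downward).
        eye_cy = (eye_left[1] + eye_right[1]) / 2
        eye_w = math.dist(eye_left, eye_right)
        brow_cy = (brow_inner[1] + brow_outer[1]) / 2
        height = (eye_cy - brow_cy) / eye_w          # normalized brow-to-eye distance
        angle = math.degrees(math.atan2(brow_outer[1] - brow_inner[1],
                                        brow_outer[0] - brow_inner[0]))
        return height, angle

    # Sweeping height and angle over a grid of rendered stimuli gives the data
    # points for locating the frown/no-frown tipping point.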
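
For idea 5, a sketch that pulls per-frame face crops out of a video so a downstream model can label pose, gaze, or emotion. It uses OpenCV's bundled Haar cascade for detection; predict_emotion at the bottom is a hypothetical stand-in for whatever classifier your group trains.

    import cv2

    def face_crops(video_path, size=(64, 64)):
        # OpenCV ships this cascade file with the library.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        cap = cv2.VideoCapture(video_path)
        crops = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
                crops.append(cv2.resize(gray[y:y + h, x:x + w], size))
        cap.release()
        return crops

    # labels = [predict_emotion(c) for c in face_crops("actor.mp4")]  # hypothetical model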
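
For idea 8, a minimal Grad-CAM pass (Selvaraju et al. 2017), one published strategy for asking which image regions drive a CNN's decision. The model and the choice of convolutional layer are assumptions here; the FERG tool's actual architecture may differ.

    import torch.nn.functional as F

    def grad_cam(model, conv_layer, image, class_idx):
        # image: a [C, H, W] float tensor; returns a spatial heat map in [0, 1].
        acts, grads = {}, {}
        h1 = conv_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
        h2 = conv_layer.register_full_backward_hook(
            lambda m, gi, go: grads.update(g=go[0]))
        model.zero_grad()
        score = model(image.unsqueeze(0))[0, class_idx]   # score for the queried class
        score.backward()
        h1.remove(); h2.remove()
        w = grads["g"].mean(dim=(2, 3), keepdim=True)     # per-channel importance
        cam = F.relu((w * acts["a"]).sum(dim=1))[0]       # weighted activation map
        return cam / (cam.max() + 1e-8)

Upsampling the returned map to image resolution and overlaying it on the input shows which facial regions the network leaned on for each cardinal expression.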
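
For idea 15, the simplest transfer baseline to compare richer systems against: copying blendshape weights between rigs that happen to share semantically named shapes. The shape names are hypothetical; a real system would learn or hand-build the correspondence.

    def transfer_weights(source_weights, target_shape_names, fallback=0.0):
        # Keep a source weight wherever the target rig has a shape of the same name.
        return {name: source_weights.get(name, fallback)
                for name in target_shape_names}

    pose = {"mouthSmile": 0.9, "browUp_L": 0.4, "browUp_R": 0.4}
    print(transfer_weights(pose, ["mouthSmile", "browUp_L", "jawOpen"]))
    # -> {'mouthSmile': 0.9, 'browUp_L': 0.4, 'jawOpen': 0.0}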
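
For idea 16, marking up dialog lines with candidate expressions using the Python NLTK. The keyword lexicon is a made-up placeholder; a real system would use a trained sentiment or emotion model, or hand-tuned rules.

    import nltk
    nltk.download("punkt", quiet=True)  # tokenizer models, needed once

    LEXICON = {  # placeholder cue words -> cardinal expression
        "wonderful": "smiling", "hate": "anger", "sorry": "sadness",
        "gross": "disgust", "afraid": "fear", "whoa": "surprise",
    }

    def storyboard(dialog_lines):
        board = []
        for line in dialog_lines:
            tokens = [t.lower() for t in nltk.word_tokenize(line)]
            cues = {LEXICON[t] for t in tokens if t in LEXICON}
            board.append((line, sorted(cues) or ["neutral"]))
        return board

    for line, exprs in storyboard(["What a wonderful day!", "I hate rain."]):
        print(exprs, "-", line)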