May 5th
Today was the last Technical Animation class! To conclude the semester, we went through a bunch of upcoming SIGGRAPH papers for this summer. First, we looked at “AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control,” which presents a framework that enables a physically simulated character to solve challenging tasks while adopting stylistic behaviors specified by unstructured motion data. They utilize GANs, which, unlike typical supervised learning, generate examples from random input and feed them to a discriminator that learns to distinguish generated examples from real ones. Their system takes in a motion capture dataset, from which it generates an adversarial motion prior, represented as a sequence of states. It then calculates a reward based on the motion prior and the goal, and the policy tries to maximize this reward. The demos look awesome! They’re close to the references, but perhaps even a little better -- the system seems to smooth out some of the discrepancies in the ground truth. This was especially evident in the crawling demo, where the reference wasn’t properly making contact with the ground, but the simulation did so very well.
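To make the reward setup concrete, here is a minimal sketch of how a discriminator-derived style reward might be blended with a task reward. The least-squares style reward shape is my reading of the paper; the weights and the discriminator score are made-up stand-ins, not the authors' code.

```python
import numpy as np

def style_reward(d_score):
    """Map a discriminator score to a style reward.
    Uses the least-squares GAN shape r = max(0, 1 - 0.25*(d - 1)^2),
    which is (as I understand it) the form AMP uses."""
    return max(0.0, 1.0 - 0.25 * (d_score - 1.0) ** 2)

def total_reward(d_score, task_reward, w_style=0.5, w_task=0.5):
    """Blend the motion-prior (style) reward with the task reward."""
    return w_style * style_reward(d_score) + w_task * task_reward

# Example: the discriminator thinks a state transition looks fairly "real" (score near 1)
print(total_reward(d_score=0.8, task_reward=0.6))
```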
Next, we discussed “Dynamic Upsampling of Smoke through Dictionary-Based Learning,” which synthesizes high-resolution smoke from a low-resolution smoke input. The high-resolution smoke matches the large-scale behavior of the low-resolution input while adding much more detail. The demos turned out great and look very believable. The only part I didn’t find convincing was how the smoke emerged from the pipes -- it seemed too perfectly cylindrical at the base, but maybe this is actually how smoke looks coming out of a cylinder; I would not know, honestly.
Then, we took a look at “Fire in Paradise: Mesoscale Simulation of Wildfires,” a really cool application that simulates the spread of forest fires. It takes into consideration a plethora of parameters, like the types of trees (e.g., conifer vs. deciduous) and how they burn differently, different ecosystems with varying levels of forest cover, temperature, wind speed, oxygen concentration, and more. The system also allows the user to interactively extinguish fire, and they can cut out cohorts of trees to see how effective that is at blocking the spread of a wildfire.
We also looked at “Kelvin Transformations for Simulations on Infinite Domains,” which proposes a general technique to transform a partial differential equation on an unbounded domain into an equivalent problem on a bounded domain. To do so, they use the Kelvin transform, which inverts the distance from the origin. Their results are pretty good, although the turtle swimming through the current didn’t look all that believable -- the current lines didn’t seem to adjust quite right as the turtle tilted up and down. The water ripples around the rabbit did look great, though!
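For reference, the classical Kelvin transform (the textbook form, which may differ in details from the paper's exact construction) inverts points through the unit sphere and rescales the solution:

```latex
x^* = \frac{x}{|x|^2},
\qquad
(Ku)(x) = \frac{1}{|x|}\, u\!\left(\frac{x}{|x|^2}\right) \quad (\text{in } \mathbb{R}^3).
```

A harmonic function on the unbounded exterior of the sphere becomes a harmonic function on a bounded region near the origin, which is what makes the bounded-domain reformulation possible.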
Lastly, we went through “Knitting 4D Garments with Elasticity Controlled for Body Motion.” This paper presents a new computational pipeline for designing and fabricating 4D garments as knitwear that considers comfort during body motion, which they accomplish by controlling the elasticity distribution with the goals of reducing uncomfortable pressure and unwanted sliding caused by body motion. Yet another awesome application! I’m interested to see where this goes.
Overall, this has been such an amazing semester. I wasn’t even aware of this type of work before stumbling upon this class, and I finally have found something that smashes together my two favorite things -- animation and coding. Thank you, Professor Pollard, for such a wonderful semester! I’ve learned so much, and I hope to continue diving deeper into computer graphics and technical animation.
May 3rd
Last week of class! Today’s class called back to last week, when we learned about “A Scalable Approach to Control Diverse Behaviors for Physically Simulated Characters,” which takes in a ton of mocap data, clusters it, creates a dynamic controller for each cluster, and then coalesces the individual controllers into one universal dynamic controller. “Fast and Flexible Multilegged Locomotion Using Learned Centroidal Dynamics” had a similar goal in that they wanted to create a controllable character. However, unlike the previous paper, this one did not aim to work off of a set of mocap data. Their main focus was creating dynamic locomotion patterns that are visually appealing and fast to compute. Their method involves feeding the desired direction and speed into an inverted pendulum model of the center-of-mass trajectory, which then feeds into a footstep planner. The footstep planner determines contact locations and timings/durations, which are used to compute the CDM trajectory and forward dynamics. Finally, the CDM plan and contact forces are used to perform momentum-mapped IK, producing full-body motion. Their demo looked great! The torso rotation and lean really added to the believability. The walk-to-jog-to-run transition looked especially good, but the slow-down from the sprint was... funky -- you could clearly tell that the pendulum trajectory model was taking over in that motion, and it did not look very realistic applied to a human.
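As a rough illustration of just the first stage, here is a tiny sketch of integrating a linear inverted pendulum model for the center of mass. This is the standard textbook model, not the paper's exact formulation, and the foot position and parameters are made up.

```python
g, h, dt = 9.81, 0.9, 0.01   # gravity, CoM (pendulum) height in meters, timestep

def lipm_step(x, v, foot_x):
    """One explicit step of the linear inverted pendulum: xdd = (g/h) * (x - foot)."""
    a = (g / h) * (x - foot_x)
    v += a * dt
    x += v * dt
    return x, v

x, v, foot_x = 0.0, 0.3, 0.0   # CoM starts over the foot with some forward velocity
for _ in range(50):
    x, v = lipm_step(x, v, foot_x)
print(x, v)   # CoM has accelerated ahead of the foot -- time to plan the next footstep
```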
Next, we discussed “Online Control of Simulated Humanoids Using Particle Belief Propagation,” wherein a character is dynamically controlled in real time and responds to goals provided via a typical controller as well as constraints and interactivity on the screen. That seems like an immense number of variables to operate off of, but they somehow managed to pull it off without any reference motion, training data, precomputation, or state machines, which is mind-boggling to me. Rather, they took a sampling approach, drawing huge numbers of samples at every step. Unfortunately, the results looked rather... terrible? Granted, I am not sure what the intended application space is for this technology, so perhaps it is great for whatever it was designed for, but none of the motion was believable to me. Still, it’s amazing that it’s all in real time.
Apr. 28th
Today’s class was on character animation. We took a deep dive into the history of character animation, starting with “Animation of Dynamic Legged Locomotion,” a paper written way back in 1991, which proposed the use of control algorithms to animate dynamic legged locomotion. They implemented computer animations of walking robots and some animals -- the kangaroos and ostriches actually looked pretty great! The motion was very believable, despite the models themselves looking like pool floaties. Their method was particularly appealing because of how much the control algorithms simplified the animation process.
Then, we took a look at “SIMBICON: Simple Biped Locomotion Control,” a simple control strategy that can be used to generate a large variety of gaits and styles in real time, from walking to running, skipping, and hopping. The controllers can be authored using a small number of parameters, or the system can take motion capture data as input to construct them. The demo was good, although the characters’ torsos and hips appeared to be made of stone, which made for stiff- and awkward-looking movement. The barbell snatch looked alright, but that’s just because the spine is meant to be kept as straight as possible throughout that movement anyway...
Next, we discussed “Generalized Biped Walking Control,” a paper that introduced a real-time control strategy for physically simulated walking motions. It requires no character- or motion-specific tuning, is robust to disturbances, and is simple to compute. Plus, it is very intuitive to the animator and makes for easy integration into the artistic process. The results were rather impressive, especially the demo of the person pulling a heavy box -- you could really feel the weight in the movement.
Lastly, we went through “A Scalable Approach to Control Diverse Behaviors for Physically Simulated Characters.” The method described in this paper takes as input a motion controller, which is then used to generate a library of reference motions. These motions get clustered, and a dynamic controller is constructed for each cluster, after which all the individual dynamic controllers are coalesced to create a universal dynamic controller that can generate any kind of motion from the library and more. The researchers demonstrated their technology by learning the motions produced by a motion graph constructed from motion capture data.
Apr. 26th
Today, we covered hand and full-body simulation research. First up was hand modeling and simulation using MRI by Jernej Barbic. This system takes in MRI images for multiple hand poses and generates a simulation-ready hand for one given subject. The process begins with capturing the MRI and using optical scanners to obtain high-resolution skin geometry, then segmenting the bones from the MRI, creating an accurate animation-ready bone rig, and lastly creating the simulation-ready soft-tissue FEM model of the hand. They manually painted creases and skin folds onto the model and used spatially varying materials for the three hand regions: front, back, and folds. The results looked believable movement-wise, and the ranges of motion for the various parts of the hand looked quite realistic. However, the skin texture was not as realistic, particularly the creases, which severely lacked depth. Though their work was a big step forward, it didn’t include some key features of the hand, like tendons and blood flow. We still have a long way to go before we can truly simulate hands realistically.
Next, we moved on to full-body simulation, starting with SCAPE: Shape Completion and Animation of People, a data-driven method for building a human shape model that captures pose- and shape-based deformations. However, we only briefly discussed this paper -- although this research was the first of its kind, it did not end up being widely adopted. Instead, SMPL: A Skinned Multi-Person Linear Model, which was released after SCAPE, came to be much more popular, thanks to its ability to fit right into the standard animation pipeline. Any application that can deal with linear blend skinning can use this technology, making it easy to integrate. SMPL is a realistic 3D model of the human body that is based on skinning and blend shapes and is learned from thousands of 3D body scans. The demos appeared to capture a wide variety of body shapes very well -- it was very interesting watching them modulate the parameters in real time and seeing the body type change realistically.
Then, we discussed SCANimate, a trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar. Avatars are driven by pose parameters and have realistic clothing that moves and deforms naturally. Their main goal was to solve the correspondence problem of where points on the clothing in the raw scan would map to in the model, which required the use of neural nets. The demos looked convincing, but it was suspicious that they only seemed to model form-fitting clothing. It would be interesting to know how this functions with flowy clothes.
We ended class with a quick look at “Computational Bodybuilding: Anatomically-based Modeling of Human Bodies,” which could simulate the effects of muscular hypertrophy and add subcutaneous fat. Honestly, it was a bit terrifying watching the bodies inflate, and some of the more stylized, exaggerated models just looked like they had silicone injected throughout their bodies...
Apr. 21st
Today was a whole lot of uncanny valley, starting with Metric Telepresence, which created codec avatars with the intention of using them in real-time communication, e.g., two people conversing within VR. They maintain a parameterization that captures what the person is doing; these parameters are collapsed from a high-dimensional animated mesh, leaving them with just 100-some parameters to track. It is then the job of the encoder to figure out the relationship between the parameters and the original mesh, such that the mesh can be collapsed into the parameters and the parameters re-inflated into a mesh. They capture all the information for a person’s 3D model with Mugsy, a contraption composed of 100 microphones, 160 cameras, and 450 lights. Mugsy looks terrifying, but it gets the job done. They then take in the head-mounted capture (HMC) video input from different points of view and, in conjunction with the information Mugsy captured, use style transfer to convert the inputs into 3D meshes. Lastly, a differentiable renderer puts it all together into one whole animated face. Knowing that not every future potential user would have access to Mugsy, the researchers conducted “The Great Human Survey,” wherein they collected a ton of data, bringing in a multitude of people to get their faces captured by Mugsy. Their goal is to use this database in the future to interpolate a 3D model given a single image of a person’s face. I personally thought that the avatars were not very expressive at all above the mouth; the eyes/brows in particular were lacking personality, and the eyes looked glassy. I did think that their texturing methods and details like blood rushing into the forehead when someone scrunches their face were nice touches.
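Their encoder/decoder is a learned deep model, but as a loose linear illustration of "collapse a mesh into ~100 parameters and re-inflate it," a PCA-style projection makes the idea concrete. The data below is a random stand-in, obviously not face captures, and PCA is my substitute for their learned model.

```python
import numpy as np

# Stand-in for tracked face meshes: 500 frames, each a flattened (num_verts * 3) vector
X = np.random.rand(500, 3000)
mean = X.mean(axis=0)

# Principal directions of variation across the frames
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 100                                # keep ~100 parameters, like the avatars

codes = (X - mean) @ Vt[:k].T          # "encode": collapse each mesh to k numbers
recon = codes @ Vt[:k] + mean          # "decode": re-inflate the parameters into a mesh

print(np.abs(X - recon).mean())        # reconstruction error from only k parameters
```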
As a follow-up to the inexpressiveness of the eyes, the same researchers came out with “The Eyes Have It: An Integrated Eye and Face Model for Photorealistic Facial Animation.” They utilized multi-view gaze tracking, similar to their methods for tracking the face in their first paper.
We then discussed “Neural Voice Puppetry: Audio-driven Facial Reenactment.” Whereas Metric Telepresence required video input, Neural Voice Puppetry only takes in voice input, which it uses to automatically animate a 3D model of a face talking. The results were... underwhelming. There was little to no emotion in the faces, and the lip-syncing was very mediocre. Plus, what are the applications of this kind of technology? They claim that the intention of their work is to humanize agents like Siri, but did they take a poll or conduct research to confirm that people would actually want that? I know I wouldn’t. Nathan also brought up a great point that this could be used in very unethical ways... does the potential benefit of this technology outweigh the potential risk?
Lastly, we looked at JALI, an animator-centric viseme model for expressive lip-syncing. Given a speech transcript and audio, JALI computes a phonetic alignment. Their method decomposes the way we say things into a 2D space -- lip vs. jaw -- which I thought was an intriguing insight. Animators can control the level of lip and jaw involvement using a “joystick” that navigates this coordinate space. They also use pitch and tone in the audio to help determine emotion. JALI was used in the game Cyberpunk 2077, but honestly, the results were rather lackluster. As with the other technologies we covered today, they lacked emotion and expressiveness.
Apr. 19th
Today began with the last set of paper presentations. Alyssa presented “Frequency-Domain Smoke Guiding,” which sought to keep the large-scale flow of a smoke simulation while increasing its resolution. Their results looked very compelling -- the smoke was very realistic, and it collided with obstacles very well. However, for some of the side-by-side comparisons with other methods, I thought the others (although quite different in appearance) still looked believable...? I suppose it’s just up to the artist what specific look they want. Then, Anne gave the very last paper presentation on “AnisoMPM: Animating Anisotropic Damage Mechanics.” The motivation behind this paper is that simulating dynamic fracture is hard; the authors proposed a method for animating the dynamic fracture of isotropic, transversely isotropic, and orthotropic materials. The pork belly, fish, and string cheese tearing looked alright, but perhaps a bit too rubbery; plus, they all seemed to fracture in very similar ways, despite being very different in real life. I was expecting the fish meat to be a little more jagged and rigid and the string cheese to be stringier.
Then, Prof. Pollard continued with deformables. We first took a look at “Meshless Deformations Based on Shape Matching,” which presented an approach for simulating deformable objects fast and stably, utilizing a geometrically-motivated underlying model. It handles point-based objects and does not require connectivity information (i.e., meshless). Whereas standard finite element approaches struggle to compress things to their flattest point, this paper’s demo video showed a rubber duck being squished completely flat, then bouncing back right away with no problem.
Next, we went over “Invertible Finite Elements for Robust Simulation of Large Deformation” and their algorithm for the finite element simulation of elastoplastic solids. The researchers were particularly seeking to resolve the element inversion that can result from standard finite element simulation algorithms. They demoed an elastic Buddha being smushed between two gears, which was horrifying to watch, but also displayed how stable the object was throughout deformation.
Lastly, we got to see two of Prof. Pollard’s papers, starting with “Direct Control of Simulated Non-human Characters,” which sought to make finite element simulations fast and interactive. The fish and starfish flopping on the pier looked great, and the fact that they were animated interactively makes it even more impressive. Then, we discussed “Fast Simulation of Skeleton-Driven Deformable Body Characters.” Their method begins by computing nonlinear strain, which is used to compute stress, which is in turn used to compute the elastic forces that are finally applied to the nodes. To speed up their algorithm, they utilized a coarse mesh; the trick to doing so was determining how to distribute the mass and how much mass to assign to each point.
Apr. 14th
Today, we discussed deformables. We opened with the “Monster Mash” paper, which introduced a new sketch-based framework designed for casual 3D modeling and animation. The user sketches a character in 2D, and the system automatically inflates it into a 3D model, which the user can then click and drag at different points to animate it. As the user sketches, they can specify ordering constraints, thus allowing them to sketch entirely in 2D without having to change camera view. The demo makes it look incredibly easy and fun to use (Nikolas even managed to make a little model and move it around using the software during class, haha).
We then went over “Teddy: A Sketching Interface for 3D Freeform Design,” one of the first systems that automatically generates 3D shapes from 2D strokes. Their basic idea was to inflate closed regions and make wide areas fat and narrow areas thin, which produced blobby, pool floaty-looking shapes. Still, given that it was one of the trailblazing systems of this kind, it is very impressive.
Next, we took a look at “As-Rigid-As-Possible Shape Manipulation,” which transforms a deformation into a straightforward set of linear algebraic operations. Whereas typical approaches to shape deformation warp the space the model is in, which messes with the model’s shape, ARAP allows the user to add multiple constraints anywhere on the model and manipulate interactively while maintaining the shape of the object. They accomplish this by triangulating the shape internally, and then by applying a manipulation algorithm to the triangle mesh, all without requiring an explicit skeletal structure.
“Fast Automatic Skinning Transformations” took the ARAP idea and applied it to meshes, specifically aiming to modify the ARAP formulation such that it was “extremely efficient” for skinning deformations.
Lastly, we went through “Graphical Modeling and Animation of Brittle Fracture” and got a tutorial on finite element simulation. This method utilizes the lumped mass model, which approximates mass as all being concentrated in certain points. The algorithm is as follows: (1) compute strain, (2) compute stress, (3) compute node forces from stress, and (4) update node state from forces. The force equations resemble spring force equations, involving some constant and a deformation (displacement). There are just four parameters to update that define rigidity, resistance to change in volume, and how quickly kinetic energy is dissipated. The demo of the breaking bowls looked pretty cool, but the goopy demo was a bit weird; the goop seemed to have jittery, almost frilly, edges/surfaces.
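Going back to that four-step loop: following the spring analogy from lecture (not the paper's actual finite element code), a minimal lumped-mass step for a single spring might look like this, with made-up constants.

```python
import numpy as np

k, rest_len, mass, damping, dt = 50.0, 1.0, 0.1, 0.2, 0.005

def spring_step(x0, x1, v0, v1):
    """(1) strain ~ stretch of the spring, (2) stress ~ spring force,
    (3) equal-and-opposite forces on the two end nodes, (4) explicit update of node state."""
    d = x1 - x0
    length = np.linalg.norm(d)
    dirn = d / length
    strain = length - rest_len                    # deformation (displacement)
    f0 = k * strain * dirn + damping * (v1 - v0)  # force on node 0 (pulls it toward node 1)
    v0 = v0 + (f0 / mass) * dt
    v1 = v1 + (-f0 / mass) * dt                   # node 1 gets the opposite force
    return x0 + v0 * dt, x1 + v1 * dt, v0, v1

x0, x1 = np.array([0.0, 0.0]), np.array([1.5, 0.0])   # stretched past rest length
v0, v1 = np.zeros(2), np.zeros(2)
for _ in range(200):
    x0, x1, v0, v1 = spring_step(x0, x1, v0, v1)
```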
Apr. 12th
Today was the third round of paper presentations!
Alice went first and presented “A Practical Octree Liquid Simulator with Adaptive Surface Resolution,” which sought to further discretize the surface of a liquid simulation in order to achieve better resolution. The simulation looked stunning in the videos, and the ways in which they chose to demo (e.g., the two blobs of water in the shapes of rabbits colliding) were so unique.
Then, Ryan Po presented “Surface-only Ferrofluids,” which outlines a method of simulating fluid via surface-only methods. This is such a cool substance to simulate, and their demos looked great -- particularly the one of some ferrofluid between two planes of glass and the labyrinthine pattern it made.
Ryan Zhang followed Po with “A Model for Soap Film Dynamics with Evolving Thickness.” Previous research on soap bubble, film, and foam simulation has mainly focused on the motion and geometric shape of the bubble surface but fails to address the evolution of the bubble’s thickness -- this is what the paper sought to model. It definitely made for more realistic-looking bubbles!
Next, Jamie discussed “An Implicit Compressible SPH Solver for Snow Simulation.” Existing snow solvers include hybrid Lagrangian and Eulerian techniques and particle-based methods, but this paper presented a novel Lagrangian snow approach based on SPH. The tire rolling on snow looked alright, but honestly, it looked more like some dry powder, like confectioner’s sugar or baking powder, than snow.
Lastly, Hesper managed to present from her car on the way to a COVID vaccination (yay for Hesper!) and went over “On Bubble Rings and Ink Chandeliers.” Upon reading the title, I truly had no clue what was going on, but it turns out these very niche simulations are very cool! The researchers did a great job replicating the physics of bubble rings -- the overall shapes of the bubble rings and the resulting collision pattern matched the live footage super well -- but the rings themselves didn’t look all that believable. The ink chandelier wasn’t nearly as impressive as the bubble rings, mainly due to the lack of ink dissipation, but they captured the general chandelier shape pretty well.
Apr. 7th
First class of April! Just over a month left until final projects are due, and our proposals are due Monday. Vaishnavi and I have already talked with each other and Prof. Pollard, so I’m not worried about getting the proposal done.
Onto class content: today was Eulerian fluid simulation. The advection-projection approach involves advecting quantities through the velocity field, applying forces to accelerate/decelerate the fluid, and then projecting to make the fluid incompressible, AKA divergence-free. This method can be made very stable and efficient, but it has one major fault: its susceptibility to dissipation.
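The loop structure is easy to sketch. Here is a deliberately crude 2D version in NumPy (nearest-neighbor backtracing, a Jacobi pressure solve, no external forces, no proper boundary handling) just to show the advect-then-project ordering; real solvers interpolate during advection and use better linear solvers.

```python
import numpy as np

N, dt = 64, 0.1
u = np.zeros((N, N))                 # x-velocity (varies along axis 0)
v = np.zeros((N, N))                 # y-velocity (varies along axis 1)
v[N//4:N//2, N//4:N//2] = 1.0        # a block of velocity to push around

def advect(field, u, v):
    """Semi-Lagrangian advection: trace each cell backwards and sample there."""
    i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    src_i = np.clip(i - dt * u, 0, N - 1).astype(int)   # nearest-neighbor backtrace
    src_j = np.clip(j - dt * v, 0, N - 1).astype(int)
    return field[src_i, src_j]

def project(u, v, iters=40):
    """Make the velocity field (approximately) divergence-free with a Jacobi solve."""
    div = np.zeros((N, N))
    div[1:-1, 1:-1] = 0.5 * (u[2:, 1:-1] - u[:-2, 1:-1] + v[1:-1, 2:] - v[1:-1, :-2])
    p = np.zeros((N, N))
    for _ in range(iters):
        p[1:-1, 1:-1] = (p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2]
                         - div[1:-1, 1:-1]) / 4.0
    u[1:-1, 1:-1] -= 0.5 * (p[2:, 1:-1] - p[:-2, 1:-1])   # subtract pressure gradient
    v[1:-1, 1:-1] -= 0.5 * (p[1:-1, 2:] - p[1:-1, :-2])
    return u, v

for _ in range(10):                  # advect -> (forces omitted) -> project
    u, v = advect(u, u, v), advect(v, u, v)
    u, v = project(u, v)
```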
Dissipation can occur during both the advection and projection phases of the advection-projection method. “An Advection-Reflection Solver for Detail-Preserving Fluid Simulation” employs advection-reflection to combat the dissipation introduced by advection. It accomplishes this by performing the projection midway through the step, thus preserving energy. This is a simple and effective method, and, visually, it definitely seems to retain a lot of detail. However, in my opinion, the demos didn’t look realistic; I wasn’t even sure most times what type of fluid was actually being simulated. “A Second-Order Advection-Reflection Solver” also makes use of the advection-reflection method, with seemingly better results -- the ink-in-water simulation looked amazing.
We then discussed some of the solutions to prevent dissipation in projection, namely PIC, FLIP, and APIC. “A Perceptual Evaluation of Liquid Simulation Methods” showed us a bunch of different fluid simulation solvers juxtaposed with each other, which was very helpful in understanding their comparative strengths and weaknesses. Of all the ones presented, FLIP seemed to look the best for the water -- it had just the right level of detail and viscosity. Unfortunately, FLIP is not a very accurate method, and, while it has lots of energy, it also has lots of noise. On the other hand, APIC, as presented in the paper “The Affine Particle-in-Cell Method,” keeps the energy better matched to the actual physics, i.e., less noisy. While PIC and FLIP transfer velocities between particles and the grid in a way that loses the local structure of the velocity field, APIC expands the particle definition to carry a locally affine velocity, better reflecting the momentum of that region of the fluid. The ice cream and lava in the demo video for this paper were astonishing!
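As a toy illustration of the transfer difference (1D, linear weights, gravity only; this is my own sketch, not any paper's code): PIC overwrites particle velocities with the grid velocity, FLIP adds only the grid's change, and solvers typically blend the two.

```python
import numpy as np

np.random.seed(0)
n_cells, n_parts, dt, g = 16, 200, 0.01, -9.8
xp = np.random.rand(n_parts) * n_cells          # particle positions (in cell units)
vp = np.zeros(n_parts)                          # particle velocities

def step(xp, vp, flip_ratio=0.95):
    base = np.floor(xp).astype(int)
    frac = xp - base
    w0, w1 = 1.0 - frac, frac                   # linear weights to the two nearest nodes

    # Particle-to-grid: weighted velocity transfer, normalized by transferred weight
    mom = np.bincount(base, w0 * vp, n_cells + 1) + np.bincount(base + 1, w1 * vp, n_cells + 1)
    wgt = np.bincount(base, w0, n_cells + 1) + np.bincount(base + 1, w1, n_cells + 1)
    grid_old = np.divide(mom, wgt, out=np.zeros_like(mom), where=wgt > 0)

    grid_new = grid_old + g * dt                # grid update (just gravity here)

    # Grid-to-particle: PIC replaces the velocity, FLIP adds the grid *change*
    pic = w0 * grid_new[base] + w1 * grid_new[base + 1]
    flip = vp + w0 * (grid_new - grid_old)[base] + w1 * (grid_new - grid_old)[base + 1]
    vp = flip_ratio * flip + (1.0 - flip_ratio) * pic
    return np.clip(xp + vp * dt, 0.0, n_cells - 1e-3), vp

for _ in range(10):
    xp, vp = step(xp, vp)
```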
Lastly, we learned about SPH, a Lagrangian method for fluid flow simulation. In this method, the continuous medium is discretized into a set of particles that interact with each other and move at the fluid’s velocity. SPH captures the overall movement of a body of fluid well, but compressibility and rendering remain major issues for this method.
Mar. 31st
Today we dove into fluid simulation, specifically Eulerian fluid simulation. We started with the paper “Realistic Animation of Liquids,” which was published all the way back in 1996. The paper was co-authored by Nick Foster, who we learned was hired for the movie Antz for the very work he had done in the paper. Their method involved the Navier-Stokes equations, which we then reviewed in class. These equations describe how to use what we know about the forces acting on the fluid to calculate the rate of change of its state variables. We also learned about the extra condition on the state required to keep the velocity field divergence-free, which is necessary for incompressible fluids like water. Although the technology described in this paper was revolutionary at the time it was published, Eulerian fluid simulation is difficult to make stable.
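For reference, the incompressible Navier-Stokes equations we reviewed are usually written as:

```latex
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu \nabla^2 \mathbf{u} + \mathbf{f},
\qquad
\nabla \cdot \mathbf{u} = 0,
```

where the second equation is exactly that divergence-free condition for incompressible fluids.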
Then, we discussed “Real-time Fluid Dynamics for Games,” which introduces a simple and rapid implementation of a fluid dynamics solver for game engines. They employed slightly different equations from those of the prior paper -- there are similarities, like the advection term and viscosity as a factor, but they differ in the lack of a pressure gradient. Omitting it is possible by projecting onto a divergence-free velocity field, based on Helmholtz decomposition. Given that their demo video was from 2000, seeing their simulated smoke swirling around shapes was pretty impressive!
Mar. 29th
Today, people presented their work for Miniproject2! Most people did cloth simulation, as expected, but everyone took such a unique direction, so it was awesome to see all the projects together. Nikolas was the first to present with his special-effects cloth simulation, which skinned the cloth with video input from his camera, then manipulated the cloth based on the movement in the video. The cloth seemed a little too jittery, but the idea was very cool. Alice followed with her spring-mass cloth simulation -- her collision demos where the cloth fell on a sphere looked particularly impressive.

Then, Alan presented his virtual music fountain -- a simulated fountain scene that sprays water to the rhythm of the music playing. This was such an amazing idea and must have taken a lot of work! The scene was gorgeous, although the water looked too “blobby” (like the way it forms discrete droplets in zero gravity), so there weren’t really many distinct spurts of water. Because of this, the water didn’t really look like it was spurting to any particular rhythm, but overall, it was a great project and, given the scope and time frame, turned out well.

Max went next with his cloth simulations -- his self-collisions were especially impressive! The cloth fold demo looked really good. The cloth jittered sometimes upon bunching up, but otherwise looked quite realistic. Zhengyang’s project, like mine, was done in AMCViewer, so I was astonished at how big a time step he was able to achieve and how great his cloth looked. He also managed to do RK4, which I could not get to work for the life of me.

Then, Vaishnavi presented her cloth simulation. Like me, she implemented symplectic Euler and Verlet. I liked the GUI she made to interact with the cloth parameters live, and her cloth looked beautifully silky. Ryan Po went after Vaishnavi with yet another cloth simulation. I thought it was awesome that he attempted tearing -- the results came out kind of weird, but it was a solid attempt nonetheless. Lastly, Hesper presented her fluid simulation, which she implemented with zero starter code! Very brave, indeed.
Mar. 24th
Today was the last of the final project pitches. Alyssa plans to implement something based on “Real-time Motion Retargeting to Highly Varied User-Created Morphologies,” which proposes a novel system for animating characters whose morphologies are unknown at the time the animation is created. The project seems really ambitious, but also very cool -- I’m intrigued to see what Alyssa ends up creating. Then, Emma pitched her idea for simulating a jello cube with a particle mass-spring system using particle grids. It would be interesting if she could extend it to Maya and expand from there. Zhengyang was the last to pitch his project idea for simulating flocks using the Boids model.
After final project pitches, Prof. Pollard went over some character optimization papers, the first being “A Neural Circuitry That Emphasizes Spinal Feedback to Generate Diverse Behaviors of Human Locomotion.” This paper proposes a reflex-driven musculoskeletal control model, based on how, in real life, walking is controlled not so much by the brain as by reflexes in the spinal cord. I thought this was a very interesting application, but the walking didn’t look quite believable. The steps were good, but the tilt forward was too extreme, and the hips barely rotated, if at all.
We then watched some example animations where accelerations are optimized, in particular, someone swinging on monkey bars, someone running, and someone hopping from peg to peg. The motion was very smooth, but not at all realistic. Prof. Pollard explained that this is because the optimization of accelerations leads to very high torques, resulting in motion that makes the character appear inhumanly strong...
Lastly, we examined the paper “Flexible Muscle-based Locomotion for Bipedal Creatures,” one of whose authors also wrote the Motion Doodles paper! In this paper, they make muscle-based simulation (driven by muscle activation levels) applicable to different characters, who are able to learn varied gaits. The walks look really good -- the turns are especially impressive! It all looked very smooth. I’ve actually come across clips of their demo video before -- I think it’s become rather viral, since the clip of the different generations of bipedal dinosaurs walking side by side with wildly different attitudes and levels of stability is pretty hilarious.
Mar. 22nd
Today was the first day of final project pitches. Nikolas went first and pitched his idea for 3D “scanimations,” which automatically rigs and animates 3D-scanned objects. It sounds like an ambitious project, but Nikolas seems to have thought this through and has some existing work that he can apply to it. I went next and presented my pitches for some hair simulation project ideas. Prof. Pollard helped me narrow my scope and gave me a general way to break down my project, which was very useful feedback. One of the things she suggested was that I figure out hair collisions with the body, e.g., head and shoulders, and that I could try implementing collisions in my Miniproject2 to get familiar with it first.

Max and Alice (as a team) and Zoltan both intend to work with PBD fluid simulation based on the paper “Unified Particle Physics for Real-time Applications,” although Max and Alice are using Scotty3D, and Zoltan is using Unity. Hesper is implementing the paper “Stable, Circulation-Preserving, Simplicial Fluids,” which proposes a novel technique for the simulation of fluid flows. It sounds difficult, but Hesper seems excited about this paper, and I’m sure she’ll be able to produce something great. Next, Vaishnavi pitched her final project idea, which is actually based on one of the same papers I referenced for my pitch, “A Mass Spring Model for Hair Simulation” by Selle et al. She seems more geared towards testing different hair models, e.g., curly hair. After Vaishnavi, Ryan pitched his project, which would implement the “Real-time Fluid Dynamics for Games” paper. He intends to translate the existing code and ensure that it works as expected, so this seems like a solid project idea with a clear path.

Then, Alan presented his idea for full-VR keyframing, wherein the six degrees of freedom in VR can be used to animate more easily. This is by far the coolest pitch I saw today and something I’ve imagined myself... I remember animating in Maya for Miniproject1 and wishing so badly for technology like this. I’d love to see what he comes up with. Aayushya followed with his project pitch, which would be an extension of his research -- waveform relaxation to solve ODEs. Both Anne and Jamie plan to use MPM -- Anne intends to employ it to simulate elastic, squishy solids, and Jamie intends to simulate snow dynamics, both of which look difficult but awesome. Lastly, the other Ryan (Po) pitched his idea for grasp analysis using deformable fingers. The images from the reference paper of a hand grasping something didn’t look all that believable...? But hopefully Ryan can produce something better.
Mar. 17th
Today was the second set of paper reviews. Zoltan was the first to present -- his paper was “A Bending Model for Nodal Discretization of Yarn-level Cloth,” which proposes a controllable, efficient bending model for yarn-level cloth to allow for the simulation of high-resolution wrinkles. The stretch test of the rib-knit fabric and wrinkling looked pretty good, but honestly, all of their results felt a little lacking in believability. The cloth still looked plastic at times and was sometimes too wrinkly.
Next, Emma presented “A Safe and Fast Repulsion Method for GPU-based Cloth Self-collisions.” This paper introduces a novel approach for safe, correct, and efficient repulsion-based collision handling, for which research is limited and GPU-unfriendly. The knot simulation demo was particularly impressive -- definitely a big difference between with and without adaptive mesh resampling. Emma noted that it was cool to see a graphics paper going into the GPU implementation, which is not too common, and I agreed.
Then, Zhengyang presented the paper “Adaptive Merging for Rigid Body Simulation.” In rigid body simulations, a lot of bodies are static relative to each other in consecutive frames. This paper exploits this to reduce computation time by merging collections of bodies when they share a common spatial velocity. This concept sounds incredibly clever, and the results speak for themselves -- their simulations looked pretty darn close to the ground truth.
Aayushya went next with a discussion of “Incremental Potential Contact: Intersection- and Inversion-free Large-Deformation Dynamics.” Contact and frictional forces are naturally discontinuous, which creates a stiff time-stepping problem. In the results, the ropes simulation looked very cool, and it was quite impressive how it was able to handle so many collisions without inversions. It was also interesting to see their results juxtaposed with those of other solvers; theirs look much more stable and realistic.
Lastly, Max presented “Detailed Rigid Body Simulation with Extended Position-based Dynamics.” Position-based dynamics, in which only positions are tracked, was originally formulated for particles; this paper sought to extend it to rigid bodies. It seems they succeeded in doing so -- the simulation of the marbles was particularly remarkable.
Mar. 15th
We started class today with an intro to final projects, for which pitches are due next week. I was not aware of this until today, so I'll definitely need to start brainstorming ASAP. I don't really have any ideas, so it was helpful to look through some project ideas together in class. I found the animation by dragging, plant growth, and motion doodles projects to be particularly interesting.
Once we were done going over final project logistics, we began today's discussion of rigid bodies and contacts and the different ways in which people approach collisions. We first covered some collision basics, like the state features (e.g., rotation matrix, linear and angular momentum) that need to be updated upon contact. We talked about soft collisions, wherein the two bodies making contact experience deformation, although this deformation may not always be discernible in a large, hard surface like the floor. With respect to the forces involved, we can consider them in terms of being applied over time or as an impulse. To find collisions, we can determine their time by backtracking via the bisection method, which is common, albeit slow, and we can detect collision points using a separating plane.
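As a tiny example of that backtracking idea, here is a generic bisection on a made-up falling-point scenario (not tied to any particular engine):

```python
def time_of_impact(height, t0, t1, tol=1e-6):
    """Bisect for the time a falling point crosses the floor (height == 0).
    Assumes no penetration at t0 and penetration at t1."""
    while t1 - t0 > tol:
        mid = 0.5 * (t0 + t1)
        if height(mid) > 0.0:   # still above the floor: collision happens later
            t0 = mid
        else:                   # already penetrating: collision happened earlier
            t1 = mid
    return 0.5 * (t0 + t1)

# Example: ballistic drop from y = 2 m; impact should be near sqrt(2*2/9.8) ~ 0.64 s
height = lambda t: 2.0 - 0.5 * 9.8 * t * t
print(time_of_impact(height, 0.0, 1.0))
```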
Then, we went into computing impulses using a coefficient of restitution, where impulses are applied in a local contact resolution scheme using Newton's law of restitution. This method is easy and intuitive, but it results in a "bubbly" kind of motion for the object, as it is constantly sinking into and being pushed out of surfaces.
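For concreteness, here is a minimal frictionless version of that impulse computation for two point masses. This is my own sketch of the standard formula, not lecture code.

```python
import numpy as np

def collision_impulse(v_a, v_b, normal, m_a, m_b, restitution):
    """Frictionless impulse between two point masses via Newton's law of restitution.
    `normal` is a unit vector pointing from body A toward body B."""
    v_rel = np.dot(v_a - v_b, normal)          # closing speed along the normal
    if v_rel <= 0.0:
        return v_a, v_b                        # already separating; nothing to do
    j = (1.0 + restitution) * v_rel / (1.0 / m_a + 1.0 / m_b)
    v_a = v_a - (j / m_a) * normal
    v_b = v_b + (j / m_b) * normal
    return v_a, v_b

# Head-on: a 1 kg ball at 2 m/s hits a resting 1 kg ball, with restitution 0.5
print(collision_impulse(np.array([2.0, 0.0]), np.zeros(2), np.array([1.0, 0.0]), 1.0, 1.0, 0.5))
```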
Lastly, we covered penalty-based and constraint-based methods. Penalty-based methods prevent penetration by calculating a penalty force as if it were a spring. Such methods also suffer from that "boiling" motion (unless tuned perfectly) and are hard to stabilize, but they are trivial to implement and scale well, because we only need to look at existing collisions (rather than searching for upcoming ones). The constraint-based method computes constraint forces such that they precisely cancel out any external accelerations that would lead to interpenetration. This is really only useful when the scene is composed of a small number of simple-shaped objects, and it does not scale well with the complexity of the scene, as penalty-based methods do.
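Back to the penalty idea for a second: the penalty force really is just a one-sided spring. A toy version (with made-up stiffness and damping values) might look like this:

```python
def penalty_force(penetration_depth, inward_speed, k=1e4, damping=10.0):
    """Spring-like penalty force pushing a penetrating object back out of a surface.
    Returns 0 when there is no penetration. The stiffness k has to be tuned: too low
    lets objects sink in, too high makes the system stiff and jittery."""
    if penetration_depth <= 0.0:
        return 0.0
    return k * penetration_depth + damping * inward_speed

print(penalty_force(0.01, 0.2))   # shallow penetration while still moving inward
```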
Mar. 10th
Today, we went over some major developments in cloth simulation. We started by taking a look at the “Robust Treatment of Collisions, Contact, and Friction for Cloth Animation” paper, which was written in 2002 but has stood the test of time. This paper looks particularly useful for the Miniproject2 cloth simulation options, which I intend to try! We looked primarily at how the paper handled strain rate to avoid rubbery behavior (which they accomplished using momentum-conserving impulses), repulsions between cloth in close proximity to reduce collisions, and friction for realistic dragging across surfaces, plus a little review of Coulomb friction. The demo video looked so good! The cloth wrinkled and folded very nicely without tangling, and it didn’t appear rubbery at all. The only weird part was when the entire cloth fell from the dresser -- those folds didn’t look quite right...?

Then, we took a look at “Data-driven Elastic Models for Cloth: Modeling and Measurement.” The process begins by putting fabric swatches into a precisely weighted rig that stretches the fabric to learn how the material deforms. The algorithm utilizes a large feature set to describe the cloth; it’s essentially one big optimization problem. The different types of folds and rigidity/softness came through pretty well in the demo.

Up next, we went over “Adaptive Anisotropic Remeshing for Cloth Simulation.” Because cloth has so many nodes to simulate at once, cloth simulations tend to be computationally complex and not very efficient. This paper presents a technique for dynamically refining and coarsening a finite element mesh such that elements are concentrated in detailed regions for efficient, inexpensive computation. They reported that they were able to significantly cut down processing time from 60+ sec/frame to 9 sec/frame, which is astonishing, and the results look pretty good! It wasn’t perfect, of course -- the flag, especially, looked a tad wonky. The ripples passing through the flag produced a lot of heavily concentrated wrinkly areas, while the rest of the flag would be completely smooth and flat.

We then discussed the “Near-exhaustive Precomputation of Secondary Cloth Effects” paper, which proposes a method wherein, for certain constrained situations, all clothing configurations can be stored ahead of time. This allows for an efficient and simple simulation, but the storing part sounds like a pain, and is this really practical? It doesn’t seem like this could be widely adopted.

Lastly, we briefly went into “A Pixel-based Framework for Data-driven Clothing,” which talks about learning cloth deformations as a function of body pose. The simulations looked quite close to the ground truth, albeit not a perfect match and somewhat bumpier.
Mar. 8th
Today, we started getting into simulation -- specifically, building a simulator for forward simulation of particles. We first examined the basic Euler method, but explicit Euler integration is not stable, e.g., it is incapable of achieving accurate spring motion. We then discussed implicit Euler integration, which is more stable but damps the system and is computationally complex, as it requires solving a system of equations. Next, we went over symplectic Euler, an area-preserving method in which the energy of the system stays bounded over time instead of blowing up. It is quick (unlike implicit Euler, there is no need to solve a system of equations) and produces a decent approximation of spring motion. However, it is not unconditionally stable. We also briefly covered Verlet and Leapfrog, some other common integrators for particle simulation. Then, we learned about RK4, whose higher-order approximation allows for larger timesteps. It yields a better approximation of the spring motion but also requires more computation. Moving on from particles, we also discussed the simulation of rigid bodies and quaternions.

Lastly, Miniproject2 was released, and Arjun gave us an explanation of what project options will be available, as well as some demos of his own work simulating cloth. Prof. Pollard also showed us a past student’s project on constraint-based cloth, where they were able to interact with a simulation via a touchscreen. They could draw a rectangle to produce a cloth, and then pinch, drag, and even tear it apart! The cloth’s movement looked incredibly realistic, too.
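Circling back to the integrators from earlier in class: here is a minimal spring comparison of explicit vs. symplectic Euler (my own toy example with made-up constants, not course starter code), which shows why symplectic Euler holds up so much better on a spring.

```python
k, m, dt, steps = 10.0, 1.0, 0.01, 1000   # spring constant, mass, timestep, step count

def explicit_euler(x, v):
    """Plain (forward) Euler: position uses the OLD velocity -- energy drifts upward."""
    x_new = x + v * dt
    v_new = v + (-k * x / m) * dt
    return x_new, v_new

def symplectic_euler(x, v):
    """Symplectic Euler: update velocity first, then use it for position -- energy stays bounded."""
    v_new = v + (-k * x / m) * dt
    x_new = x + v_new * dt
    return x_new, v_new

xe, ve = 1.0, 0.0
xs, vs = 1.0, 0.0
for _ in range(steps):
    xe, ve = explicit_euler(xe, ve)
    xs, vs = symplectic_euler(xs, vs)

energy = lambda x, v: 0.5 * m * v * v + 0.5 * k * x * x
print(energy(xe, ve), energy(xs, vs))   # explicit Euler's energy has grown noticeably
```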