For the animation of my character I wish to use motion capture. The animation will be taken from a dancer, so they need to be able to move freely. I did consider using the exoskeleton motion capture suit here, but it does not allow the performer much flexibility or give a very accurate recording. iPi Studio, on the other hand, is markerless and can be done with web cameras or Kinect cameras.

I intend to have a play with this software before doing the real thing, as I will need to learn a few things. There is a Human IK preset rig in Maya that I can practise with, and some pre-captured data on the website I can use.

I did a mocap test with Annabeth, who showed me how to use it, what the result would be, and whether it is cost effective in terms of time. The results I got were good; it's fascinating stuff and a definite interest of mine. I could see the results developing quickly.

We used one camera and got Andy to jump around a bit so we had some footage to play with. Before that we captured the background plate, to tell the software that it isn't the area of interest. We then recorded Andy in a t-pose for a couple of seconds, which is needed later on, did about six seconds of movement and saved the scene.

Once we had that data, we took it into iPi Studio and imported the scene we had just recorded. The t-pose is important: it is used to align the skin in iPi Studio with the depth map data recorded. The straighter and more accurately the actor can hold a t-pose, the better and quicker the skin will retarget the data. You can then track forward through your area of interest, creating the keyframes on the skin, which contains a skeleton.

We tested it out using a free rigged model from the internet. It was quite jumpy, but we added jitter removal and it was a lot smoother. It worked quite well, but the one thing I really took from this was the naming conventions.
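For anyone wondering what jitter removal is actually doing, here is a rough sketch of the general idea, not iPi Studio's real algorithm: the raw tracked joint curves wobble from frame to frame, and a smoothing pass averages each frame with its neighbours. The function and the numbers below are made up purely for illustration.

```python
# Conceptual sketch of jitter removal: a centred moving average over one
# animation channel (e.g. a joint's rotateX in degrees, one value per frame).
# This is an illustration only, not what iPi Studio does internally.

def smooth_channel(samples, window=5):
    """Return a jitter-reduced copy of a single animation channel.

    window: odd number of frames to average over; larger = smoother but laggier.
    """
    half = window // 2
    smoothed = []
    for i in range(len(samples)):
        lo = max(0, i - half)
        hi = min(len(samples), i + half + 1)
        neighbourhood = samples[lo:hi]
        smoothed.append(sum(neighbourhood) / len(neighbourhood))
    return smoothed


if __name__ == "__main__":
    # Fake, noisy elbow rotation curve standing in for raw tracked data.
    noisy = [30.0, 32.5, 29.0, 33.0, 31.0, 36.0, 34.5, 38.0, 36.0, 40.0]
    print(smooth_channel(noisy, window=3))
```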
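And on the naming conventions: when the mocap data is retargeted onto a rig, the joints on the two skeletons have to be matched up by name, and a mismatch means that joint simply gets no animation. The sketch below is hypothetical; the joint names and the mapping are invented examples, not taken from our actual export or the free rigged model.

```python
# Hypothetical illustration of why consistent joint naming matters when
# retargeting mocap data onto a rig. All names here are example placeholders.

JOINT_MAP = {
    "Hips":       "pelvis",
    "Spine":      "spine_01",
    "LeftUpLeg":  "thigh_L",
    "LeftLeg":    "calf_L",
    "RightUpLeg": "thigh_R",
    "RightLeg":   "calf_R",
    "LeftArm":    "upperarm_L",
    "RightArm":   "upperarm_R",
}

def retarget_channel_names(mocap_channels):
    """Rename mocap channels to match the rig; report anything unmapped."""
    mapped, missing = {}, []
    for joint, frames in mocap_channels.items():
        if joint in JOINT_MAP:
            mapped[JOINT_MAP[joint]] = frames
        else:
            # An unmapped joint means no animation lands on that part of the rig.
            missing.append(joint)
    return mapped, missing

if __name__ == "__main__":
    fake_export = {"Hips": [0.0, 0.1], "LeftArm": [10.0, 12.0], "Neck": [5.0, 5.5]}
    mapped, missing = retarget_channel_names(fake_export)
    print(mapped)   # channels the rig will receive
    print(missing)  # ['Neck'], i.e. the kind of naming mismatch that bites you
```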