  1. Hey guys, I am working on a scene where I am simulating ropes (https://www.youtube.com/watch?v=lhXiD280OdA&t=1285s) and I have a Hair object on the ropes. Everything works fine and my Octane Live Viewer correctly shows the hair, which stays on the ropes during the entire simulation. However, when I render out my scene, the hair stays "stuck", frozen at the position of the ropes on frame 1. In other words, the simulation works and the ropes animate and interact with each other, but the hair stays frozen at its initial position... I tried caching, I tried adding an Octane Object Tag and checking "Hair", I checked "Hair" in the render settings, etc. Nothing works. Does anyone know what the issue could be? Thank you
  2. Thank you both for your answers. Let me clarify my workflow and why I need to do this at all. In the final stage of my animation, I will be using the high-poly version of my Alembic animation. The thing is, I am cloning hundreds of copies of the character and exporting that as one new Alembic, so I can import it into Marvelous Designer to create a cloth simulation. Importing hundreds of high-poly animations instantly freezes Marvelous Designer. So what I am trying to do is: 1) make temporary low-poly versions of my character animation; 2) run the cloth simulation smoothly in Marvelous; 3) import the cloth-animation Alembic back into C4D; 4) use my high-poly character for the final render. Basically, I want to use a low-poly character to simulate the cloth animation so that I can render the final scene in C4D with the high-poly character. It will save me hours of calculation time with the same result. The motion of the character drives my cloth animation, so whether it's high- or low-poly doesn't matter for the end result. I'd rather do the simulation with a low-poly mesh and then render with the high-poly one. (I will try your idea of making a low-poly character BEFORE putting it through Mixamo. That might work.)
  3. Is there a way to reduce the polygons of an Alembic animation? The object is a Mixamo character. I tried the Polygon Reduction generator in R21; it works, but it has to recalculate for every frame, which makes the entire process extremely slow. How do I cache the animation WITH the reduced-polygon object? When I try to point cache my Alembic animation, the Polygon Reduction keeps recalculating every frame. Thank you
  4. I ended up not using CV-AR, which is why I made a new post. I now have a full head instead of just a face (which is what CV-AR gives you). I can't just copy the data to a rigged Mixamo face because it's a custom model. But you just gave me an idea: if I take my T-pose Mixamo character and replace the head with my custom rigged head, I should be able to put that model through Mixamo for the body, and THEN copy the data from the head to the now-animated Mixamo character, because it will have the same face rig. Not sure if that'll work, though...
  5. Hey everyone, I did some face capture on a rigged head. The model is an FBX head with a neck. I would like to attach that head to a Mixamo-animated character. Is there a correct way of doing this? I found that attaching the animated head to the "neck" joint semi-works, because the head follows the body, but the mesh doesn't deform at the neck as it should. You can tell it's two completely different meshes. As always, any tip is appreciated. Thank you
  6. In the meantime I figured out that this is both the problem and the solution... There is no "universal" or "standard" blendshape structure: a blinking left eye can be named differently in your target, so you have to search manually. I managed to connect some blendshapes from my capture to my target, obvious ones like "MouthOpen" and "LeftEyeBlink". That worked, but the target was far from showing the accurate facial expressions that the capture had. Everything was simply named completely differently in the target face rig, which made it a pain, and ultimately impossible for me to develop a time-efficient workflow. I can post a CV-AR capture if you want.
  7. Hey man, could you share your Xpresso rig and rigged head model? I am currently struggling with CV-AR and need a working retarget example to learn how to do it myself with a custom head model... It would save my life for this project.
  8. Thank you so much for your reply, and great work on CV-AR! I have some follow-up questions, if that's okay. I've never rigged a face, and I have never worked with Xpresso. I am doing research right now to understand what FACS is. I am going to have my model's face rigged and try to animate it using the Xpresso that CV-AR automatically creates. My question is the following: assuming I have a rigged face, how easy is the setup to transfer the CV-AR data to my rigged face, especially for someone who has never worked with blendshapes or Xpresso? What do I need to do, specifically?
  9. Hi everyone, I'm currently struggling with a project. Let me try to be as clear as possible: I am basically trying to animate the face of a character that has a Mixamo mocap animation on it. I am using CV-AR to record facial mocap with an iPhone 11, but the C4D plugin gives me a face (not a head) with a bunch of premade Xpresso. If I wanted to transfer that mocap data to a custom head model, or to the head of the Mixamo character, what would I need to do? Do I need to rig the face? If so, how do I do that? Do I use FACS / blendshapes? Is there a specific rig I need to use so that it works with the Xpresso I got from the CV-AR data? I'm very new to mocap and I am stuck on this part of the project. Any help is appreciated. Please don't hesitate to ask if you need extra info. Thank you, I hope someone can help me with this.
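The naming mismatch described in posts 6 and 9 (a blinking left eye having one name in the capture and another in the target rig) boils down to a per-shape weight-remapping step. A minimal pure-Python sketch, just to illustrate the idea: the target names ("Mouth_Open", "Blink_L", etc.) are hypothetical examples, and in C4D the weights would actually be pushed into a Pose Morph tag via Xpresso rather than a dict.

```python
# Hand-built mapping: CV-AR / ARKit blendshape name -> target rig morph name.
# There is no standard naming, so this table must be assembled manually
# for each target rig; the right-hand names below are made up for the example.
NAME_MAP = {
    "jawOpen": "Mouth_Open",
    "eyeBlinkLeft": "Blink_L",
    "eyeBlinkRight": "Blink_R",
}

def retarget_weights(capture_weights, name_map):
    """Translate one frame of capture weights onto the target rig.

    Shapes with no entry in the map are collected and returned so they
    can be mapped manually later instead of being silently dropped.
    """
    target_weights = {}
    unmapped = []
    for src_name, weight in capture_weights.items():
        if src_name in name_map:
            target_weights[name_map[src_name]] = weight
        else:
            unmapped.append(src_name)
    return target_weights, unmapped

# One frame of capture data (ARKit-style names, values in 0..1):
frame = {"jawOpen": 0.8, "eyeBlinkLeft": 1.0, "mouthSmileLeft": 0.3}
mapped, missing = retarget_weights(frame, NAME_MAP)
# mapped  -> {"Mouth_Open": 0.8, "Blink_L": 1.0}
# missing -> ["mouthSmileLeft"]  (no target morph assigned yet)
```

The `unmapped` list is the practical part: it tells you exactly which capture shapes still need a manual match, which is the tedious search described above.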