
Everything posted by everfresh

  1. if you don't connect it to the head it will fall off as soon as you move the head. as for how i did it, just take a look at the softbody settings. i think chris schmidt from gsg did a tutorial once where he explained all the softbody settings in detail, maybe you want to take a look at that as well to get a better understanding.
  2. i wouldn't necessarily do this with cloth... very tricky to get right... you could also do it via softbody or just a simple dynamic joint chain. each solution gives a different look of course. for the cloth solution you might wanna belt a selection of your geometry to the head; for the softbody you'd have to attach it with a connector. a constraint tag for the bobble to follow the tip won't work in any case, since a constraint looks at the axis of an object, not a point location (except for the clamp constraint set to surface mode, but unfortunately that's a rather buggy constraint which i haven't seen anyone get to work flawlessly in a more complex setup like this)... if you're going for softbody you could make everything one single piece of geo and work with vertex maps to set different degrees of stiffness on the object. the easiest way would be the dynamic ik-joint chain route, but that won't give you proper collisions of the surface (in case you really need that)... although you could combine that with a collision deformer. here are all 3 approaches, you might wanna dial in the settings a bit better though... santa_hat_ik.c4d santa_hat_softbody.c4d santa_hat_cloth.c4d
  3. yeah, i wish that was possible, too. for the hands in my rig i just made them all one single component, but i face the same issue with my facial rig for instance. i wish i could just make placeholder xpresso nodes relying on a certain naming convention so i wouldn't have to hook up all my pose morphs manually for each character. i'm pretty sure this could be scripted though: the script would just have to scan the scene for objects and tags with the defined names and replace the xpresso nodes with them.
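the scanning part could be sketched like this — plain python with a stand-in Node class instead of real c4d objects (in cinema 4d you'd walk the hierarchy with doc.GetFirstObject() / GetDown() / GetNext() instead). all object names and the "_ctrl" suffix are made up for illustration:

```python
# sketch: collect scene objects whose names follow a convention like
# "<character>_<part>_ctrl", so placeholder xpresso nodes could later be
# rebound to them. Node stands in for c4d.BaseObject.

class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def walk(node):
    """depth-first traversal of the hierarchy."""
    yield node
    for child in node.children:
        yield from walk(child)

def find_by_convention(root, suffix="_ctrl"):
    """map convention names -> nodes, e.g. {'bob_jaw_ctrl': <Node>}."""
    return {n.name: n for n in walk(root) if n.name.endswith(suffix)}

# a tiny mock scene (names invented):
scene = Node("root", [
    Node("bob_rig", [Node("bob_jaw_ctrl"), Node("bob_brow_ctrl")]),
    Node("bob_mesh"),
])
targets = find_by_convention(scene)
print(sorted(targets))  # ['bob_brow_ctrl', 'bob_jaw_ctrl']
```

from a dict like that, a script could look up each placeholder node's name and swap in the matching object.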
  4. that was really fun to watch. especially enjoyed the camera work!
  5. mostly just musicians who want a music video like that but have no money :D other than that a few festivals asked if they could screen it. but it was a nice experience to finally manage to get a staff pick anyways.
  6. you have to use name mode. you also have to type in the prefixes L_ and R_ ... that's for painting limbs like arms and legs for instance... when you're painting weights for joints that are located at x=0, like the spine, you want to use single mode, because if you paint those center joints in name mode and paint a stroke from one side to the other, your weights will get doubled (just not for the points located exactly at x=0). i recommend watching a tutorial on weight painting, it's much easier to understand visually. there's an older one on cineversity by bret bays, but the basics haven't changed.
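the L_/R_ convention that name mode relies on boils down to a simple prefix swap — a tiny illustrative sketch (joint names invented, not from any actual rig):

```python
def mirror_joint_name(name, left="L_", right="R_"):
    """return the opposite-side joint name, or None for center joints.

    this mirrors the idea behind name mode: a stroke on 'L_Forearm'
    also paints 'R_Forearm'. joints without a side prefix (spine, head)
    have no counterpart, which is why they belong in single mode.
    """
    if name.startswith(left):
        return right + name[len(left):]
    if name.startswith(right):
        return left + name[len(right):]
    return None  # center joint, e.g. 'Spine_01'

print(mirror_joint_name("L_Forearm"))  # R_Forearm
print(mirror_joint_name("Spine_01"))   # None
```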
  7. no problem... don't hesitate to ask if you have any further questions...
  8. it's always a bit tricky to look at a file someone else created and tell exactly where it went south ;) what i can see though, looking at the weights manager, is that you have a lot of un-normalized weights going on; that means the total weight for a point weighted to several different joints isn't at 100%. what i can also see is that something is very weird: the hand/forearm twist on the right side isn't behaving like the left. any chance you accidentally mirrored the whole rig instead of just the weights at some point? as a general workflow tip: instead of weighting one side and then mirroring it, you could also use the mirror function within the weight paint tool, which allows you to paint the weights on both sides simultaneously. the mirror tool can give headaches from time to time, that's why i tend to avoid it as much as possible.
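to make the un-normalized weights point concrete: for every point, the weights across all its joints should add up to 100%. a minimal sketch of what normalizing does (the numbers are made up):

```python
def normalize(weights):
    """scale a point's per-joint weights so they sum to 1.0 (100%).

    weights: dict of joint name -> weight for one point.
    """
    total = sum(weights.values())
    if total == 0:
        return weights  # unweighted point, nothing to scale
    return {joint: w / total for joint, w in weights.items()}

# a point accidentally weighted to 130% total:
point = {"L_Forearm": 0.9, "L_Hand": 0.4}
fixed = normalize(point)
print(fixed)  # roughly {'L_Forearm': 0.69, 'L_Hand': 0.31} -- sums to 1.0
```

the "normalize" button in the weights manager does this kind of rescaling for you across the selected points.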
  9. like i mentioned earlier, you need to get your character back into the initial bind pose, then deactivate your skin object, THEN make the adjustments (shift axes, make limbs longer etc.), then hit reset bind pose in your weight tags, and then enable the skin object again. after that nothing should shift. no need to rebind anything.
  10. yes, that is a problem... the mirror functions look at the position of the points in relation to their local axis, not the world. so it's important to have the axis of all your meshes at x=0.
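a quick sketch of why the axis position matters: mirroring flips a point's x in the object's local space, so if the axis sits off world x=0, the mirrored point won't land on the opposite side of the mesh (numbers invented for illustration):

```python
def mirror_point_world_x(point_world_x, axis_world_x):
    """mirror a point across the object's local YZ plane.

    convert world x to local space, flip it, convert back to world.
    """
    local_x = point_world_x - axis_world_x
    return axis_world_x - local_x

# axis at world x=0: a point at +10 mirrors to -10, as expected
print(mirror_point_world_x(10.0, 0.0))  # -10.0
# axis shifted to world x=5: the "mirror" of +10 lands at 0, not -10
print(mirror_point_world_x(10.0, 5.0))  # 0.0
```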
  11. the bend slider in the advanced biped doesn't actually bend the limb; at 100% it just shows the bend controls, which you then have to move to get the bend. didn't have the chance to look at your file yet, but the pinching probably happens because the weights didn't get mirrored correctly, so just manually fixing the weights should work. edit: took a look at your file and, as suspected, it's just a weighting issue. you need to correct the weights for the forearm and also the hand/wrist section.
  12. not entirely sure what your desired workflow would be exactly, but here's how you could go about it: rig your first character entirely, then duplicate that first character with the recurring limbs, delete the mesh parts that are unique to that character and replace them with the new ones. or rig one limb (or a set of limbs), put the joints, controllers and mesh in a null, then duplicate that null and copy it to the new character. but this method could be tricky depending on your overall rig setup. i'm pretty sure you can replace joints in your weight tag (never had the need for it tbh), but if it's just a limb, weight painting should only take a couple of minutes anyway. whenever i had a situation like that i went with my first suggestion. if your characters have different proportions you can adjust the rig after skinning and weighting, just make sure you deactivate the skin tag, then hit the "set bind pose" button in your weight tag, and then activate the skin tag again. i hope that helps.
  13. Danger Zone WIP (NSFW)

    haha... sorry, that bit of information must have slipped past me when i was flying over your post. :D well, then i have nothing ;)
  14. Danger Zone WIP (NSFW)

    love it. if you're looking for criticism, the only thing i've got is that towards the end there are two screens with just a still image (skull with snake for instance) where nothing's moving. gave me the impression you kinda rushed towards the end (i know how it is when you want something to be finished already)...
  15. it's alive! and you are, too ;) but where's the audio???
  16. HUD integrated control interface?

    just drag your user data into the viewport... shift-click those elements in the viewport to select multiple, then right click >> make group. ctrl-click and drag the elements to move them.
  17. here's an easy way to do it: make sure you have nulls controlling your spline; you can either do that via the free spline rig script or simply with a tracer set to connect objects. once you have your mouth shape spline you can make it match the mouth hole and sweep it. then just put a joint as a child of each null that forms your mouth shape and weight your mesh to them. now as soon as you move the nulls which form the mouth shape, the mouth mesh behind it will follow. you can also put all the mouth shape nulls into another parent null to move and scale them all at once.
  18. crowds in c4d?

    you can do this in c4d; with the new instance modes you can even do bigger crowds now with animated characters. i'd recommend exporting each character walk loop as alembic and cloning those, which is more performant. if you have a dense crowd you might face the issue of collisions. the push apart effector could help in this case; if it makes everything a bit jittery, it helps to lower the iterations (just found this out a couple of days ago, before i always tried to increase that number) and throw in a delay effector afterwards.
  19. did anybody here successfully manage to bake a tangent vector displacement map in c4d and achieve an accurate displacement result in redshift with that map? i believe i tried everything: baking it out as 8/16/32-bit tiff/exr/png, you name it. the baked result looks exactly like the sculpt in a c4d material with the standard renderer, but in redshift it's always off. i get the displacement going, that's not the issue, it's just nowhere near accurate and i'm also getting shading artefacts. and yes, i set the redshift displacement node to vector and tangent, and i also played with the min/max old/new values as pointed out in various topics i browsed through on the redshift forums... spent hours trying to get it to work properly, no luck.
  20. GI flickering in animation

    have you set primary and secondary GI method to qmc? also make sure you set the sampling to "high"...
  21. Illumination (animation)

    looks really good, i like the overall mood a lot. one little detail that caught my eye though: when the paper boat spins it looks a little unnatural, mainly because the water ripples don't change their behavior accordingly. also, some very slight banking and bobbing up and down of the boat might help.
  22. Ideas about an advertisement?

    make it look like a motorcycle dented into the bus chassis, like the one with the sign, and make the bus windows crashed in the form of a human silhouette. maybe some blood spatters if they're up for it (i highly doubt it, but worth giving it a shot).
  23. i could be wrong, but i think i had that even with baked xp sims with meshing involved. and yeah, you're right about the farm rendering issue. have you tried baking it to alembic?
  24. i'd do that with meshes, just much more convenient to animate than textures. the easiest way is to use a straight mesh for the windshield and straight cylinders for the eyes, and just bend everything with deformers... file attached. bend_eyes.c4d
  25. i don't think it can be called a bug. it's only the first frame that has this prep time, so if you render a sequence from, let's say, frame 300 to 1000, only frame 300 will have this additional preparation time. the picture viewer just has to play back the timeline until it is at that certain frame; each following frame only has to calculate itself, so it renders more or less immediately. to me it just seems like the normal behavior of the picture viewer, which i'm sure MAXON is aware of but hasn't found a way to solve otherwise. if you run for instance an x-particles sim with complex stuff in it, where you get a viewport frame rate of only 1 or 2 fps, you will wait several minutes for that first frame in the middle of your animation. it's always been that way, at least for as long as i've been using c4d.
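a quick back-of-the-envelope example of that waiting time (the frame rate numbers are invented for illustration): the picture viewer has to play through every frame before your start frame once, at whatever rate the sim allows:

```python
def first_frame_prep_seconds(start_frame, sim_fps):
    """rough wait before the first rendered frame: the scene has to be
    played back from frame 0 to start_frame at the viewport sim rate."""
    return start_frame / sim_fps

# an x-particles sim crawling at 2 fps in the viewport, rendering from frame 300:
secs = first_frame_prep_seconds(300, 2)
print(secs, "seconds, i.e.", secs / 60, "minutes")  # 150 seconds, 2.5 minutes
```

every frame after that only needs its own render time, which matches the behavior described above.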