Everything posted by esmall

  1. Try using the 3D - Spherical gradient type. This applies the gradient spherically in world space, and you can set its world-space center in the corresponding Start coordinates. Alternatively, you could try using a Proximal shader inside of a Distort shader.
  2. Instead of Xpresso, try just using a Selection Object. Select all of your SDS objects, then go to Select > Selection Filter > Create Selection Object. This creates a null object with an Object tab. The tab has a list of all the SDS objects that were selected at the time of creation, and a "Restore Selection" button. Press the button to select all of those objects, then check/uncheck Enable in the Basic tab. Alternatively, clicking the closed eyeball in the upper right of the Object Manager opens a window that lets you select objects, tags, layers, etc. There you can right-click the SDS object, choose Select All, and perform the same enable/disable step as above. If you'd rather script it, see the sketch below. Eric
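
For anyone who prefers to automate the toggle, here is a minimal Script Manager sketch that flips the enable state of every SDS object in the scene. It assumes the standard c4d Python API; the hierarchy walk is just one way to visit every object.

```python
import c4d

def walk(op):
    """Yield every object in the hierarchy starting at op."""
    while op:
        yield op
        for child in walk(op.GetDown()):
            yield child
        op = op.GetNext()

def main():
    doc = c4d.documents.GetActiveDocument()
    for obj in walk(doc.GetFirstObject()):
        if obj.GetType() == c4d.Osds:  # Subdivision Surface objects only
            # ID_BASEOBJECT_GENERATOR_FLAG is the green "enable" checkmark.
            obj[c4d.ID_BASEOBJECT_GENERATOR_FLAG] = \
                not obj[c4d.ID_BASEOBJECT_GENERATOR_FLAG]
    c4d.EventAdd()  # refresh the UI

if __name__ == '__main__':
    main()
```
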
  3. I'd try using a linear Cloner of cubes touching edge to edge, and apply a transparent material with refraction. Then I'd fiddle with the Fillet Radius of the cube and the refraction value of the material. See attached (the camera-mapped image of an apple is NOT mine; it is used for reference only and was located using a Google image search). Hope this helps. reededGlass.c4d
  4. The Surface deformer calculates based on the object's UV layout, and your object's UVs are not properly laid out. To get the MoText to project, you'll have to either use Current State to Object (to get a polygonal object) or do each letter separately. Since MoText is procedural, it does not have editable UVs; because of this, I believe each letter is assigned the same UV space, so it will project, but all the letters stack on top of each other. It looks like you are using R17, and I saved this in R19, so in case you can't open it, I took a screenshot of the UVs and a screenshot of the viewport. The conversion step can also be scripted; see the sketch below. Hope this helps. project_test_UVs.c4d
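
As a reference, here is a hedged sketch of the Current State to Object step from Python, using the documented SendModelingCommand call. Select the MoText object first; treat this as illustrative rather than the only workflow.

```python
import c4d
from c4d import utils

def main():
    doc = c4d.documents.GetActiveDocument()
    op = doc.GetActiveObject()  # select the MoText object first
    if op is None:
        return
    # Current State to Object returns a polygonal copy with baked UVs.
    result = utils.SendModelingCommand(
        command=c4d.MCOMMAND_CURRENTSTATETOOBJECT,
        list=[op],
        doc=doc)
    if result:
        doc.InsertObject(result[0])
        c4d.EventAdd()

if __name__ == '__main__':
    main()
```
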
  5. The potential issue with doing that: if your cube's position is animated, the cube's axis will still remain animated in that location, but the actual cube's location may differ. See attached: the axes of both cubes are animated in the same location, but the cubes are in different places. axisLocation.c4d
  6. It's hard to tell without a test file, or without knowing what you're animating (figure? object? complicated? simple?), but most likely, no. Animated properties such as Position, Scale, and Rotation (PSR) are oriented on the object's axis, so changing the axis would alter the animation. One possible solution: create a null object and set a keyframe on every PSR property you have animated. In the timeline, copy the keyframes from the original object and paste them onto the null (this copy/paste is done in the timeline's Edit menu). Then delete the keyframes from the original object and make it a child of the null that just received the pasted keyframes. From there, you'll have the ability to adjust the original object's axis, tweak its position, etc. If you have a lot of tracks, the transfer can be scripted; see the sketch below. See the attached c4d file. animationTransferred.c4d
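
Here is a minimal sketch of that track transfer, assuming the object's animation lives in ordinary CTracks (PSR keyframes); the null's name is just illustrative.

```python
import c4d

def transfer_tracks(src, dst):
    """Move every animation track (keyframes included) from src to dst."""
    for track in src.GetCTracks():
        clone = track.GetClone()       # copy the track with all its keys
        dst.InsertTrackSorted(clone)   # attach the copy to the null
        track.Remove()                 # strip the track from the original

def main():
    doc = c4d.documents.GetActiveDocument()
    obj = doc.GetActiveObject()        # the animated object
    if obj is None:
        return
    null = c4d.BaseObject(c4d.Onull)
    null.SetName(obj.GetName() + '_anim')  # illustrative name
    doc.InsertObject(null)
    transfer_tracks(obj, null)
    obj.Remove()
    doc.InsertObject(obj, parent=null)  # re-parent under the null
    c4d.EventAdd()

if __name__ == '__main__':
    main()
```
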
  7. It's hard to tell without a scene file, or without seeing an image of what you're trying to model (it looks like a flange, or a cap of sorts?), but looking at your axis, it appears that your Cloner is not at the center, or hub, from which you want the cloned objects evenly spaced. Try using a Radial Cloner centered at the middle of the circle (see the screenshot with the blue arrow), and adjust the start/end angle of the Radial Cloner.
  8. Echoing Ninjad's assessment of Lens Blur: it is a hot mess. If you really need it, go ahead and use it, but another downfall is that it is a single-core effect. This means your entire comp will render with only one core of your computer, regardless of how many cores you have. Even if the effect is applied to 10 frames of a 1000-frame render, all 1000 frames will render with one core. If you can spare the extra money, invest in Frischluft's Lenscare plugin. It does a GREAT job of applying DoF, and it is very convincing at blurring the edges of high-contrast/large-distance objects. But yes, to get truly accurate blurring, you'd need to apply it in 3D at render time. Unfortunately, that limits the flexibility in post. There's always a tradeoff.
  9. If you have it rigged to your liking, select all rig components and any mesh you want to make symmetrical, and go to Character > Mirror. You'll probably need to fiddle with the settings and/or watch some tutorials on how to properly use this tool, but this is what you want. A Symmetry object only duplicates what you have on the other side, so you won't be able to animate each side independently. The Mirror function in the Character menu duplicates the actual object(s), tag(s), weighting, etc.
  10. Hmm, not sure what's up on your end, but I got it to work with the two images you provided. See my comp and the screenshot of the effect settings, and check whether something looks different to you.
  11. As an alternative to the Tracer (a mighty fine suggestion), an interesting but little-known function in the timeline might help you (if your object is freely keyframed in space): select your object in the Dope Sheet (not the position track), go to Functions in the timeline menu, and choose the Position Track to Spline command. This does what its name suggests and makes a spline based on the object origin's movement through space. That would leave you to hand-animate (or set up some Xpresso, or a Python tag like the one sketched below) the End/Start growth of the Sweep NURBS. But hey, at least it'd give you an exact spline based on the object's path through space!
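
For the growth animation, a Python tag on the Sweep is one option. This sketch (my own illustration, not a required setup) maps the document's animation range onto the Sweep's End Growth:

```python
# Python tag placed on the Sweep object: drives End Growth from 0 to 100%
# across the document's animation range.
import c4d

def main():
    doc = op.GetDocument()
    sweep = op.GetObject()             # the Sweep the tag sits on
    start = doc.GetMinTime().Get()     # seconds
    end = doc.GetMaxTime().Get()
    now = doc.GetTime().Get()
    t = (now - start) / (end - start) if end > start else 1.0
    sweep[c4d.SWEEPOBJECT_GROWTH] = t  # End Growth, 0.0 - 1.0
```
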
  12. Thanks for the confirmation. I emailed them on Mar 3 regarding the issue, sent files/screenshots, etc., and I just followed up this morning. I figure if I continually follow up with MAXON, one of the following two sayings will become relevant: - the squeaky wheel gets the grease - the tallest blade of grass gets clipped first. I hope for the former!!
  13. Has anyone experienced this issue and/or found a solution? In R19, when using any form of multipass, I end up with additional RGB and Alpha passes. I render as PNG with a straight alpha, so I do not need either of these passes. If rendering a "Regular Image" only, these passes are not generated; if rendering any additional multipasses, they are created despite not being added as passes. See the attached screenshots: the Picture Viewer shows the appropriate passes: Background, Alpha (always shown in the Picture Viewer, but never saved as an actual file with PNG images), Object Buffers 1 & 2, and Depth. In the Finder, you can see the actual images generated: the background/beauty image, object buffers 1/2, and depth, but also the unwanted RGB and alpha images. Am I missing a setting somewhere to disable this pass generation, or is this a bug? This issue is plaguing my entire studio, so it's not just me, and I have seen posts on other forums asking about solutions. I've attached a c4d file that demonstrates it. Thanks in advance. extraPasses.c4d
  14. I would use a Shader Effector, plus a material that uses a MoGraph Color Shader. In the attached file, I made a separate material and used the Color Shader in the alpha channel, so I could control colors in separate materials. But you could just as easily use a single material: define the base cube color in the Cloner itself and the changing color in the Shader Effector's Shading parameter, and put a MoGraph Color Shader in the color channel of the cube's material. The actual change in color is controlled by the width of the Shader Effector's falloff. cubeRotateColorChange - mograph.c4d
  15. How about a Plain Effector set to rotation mode? See attached. cubeRotate - mograph.c4d
  16. I think some clever modeling would do it: either some extrudes to give the appearance of plates that make up the lips, or separate/discrete geometry. Either way, I think the Pose Morph would be your friend here. See the attached files: one is a setup using a continuous mesh, the other uses separate geometry. Both setups use the Pose Morph tag. robotMouth.c4d robotMouth 2.c4d
  17. To echo Cerbera's comment, I would approach this as a mechanical rig. Look up some tutorials on the topic (search for robotic rig, mechanical rig, etc.). Another suggestion: go to a hardware store and pick up an actual SOSS hinge in order to appreciate the intricacies of such a hinge. Being able to put hands on the hardware will reveal a lot of the fine details. Or, at the very least, find a good video of the hinge in action and watch it frame by frame, forwards and backwards. E
  18. Sounds like your Xpresso setup is using Absolute nodes instead of Relative nodes to reference the path of the object from which you're trying to create a trail. Question for you: why not just use the MoGraph Tracer object, then Sweep the Tracer? I've seen some Xpresso setups online that create trails from objects' paths, but usually those are for people who don't have the C4D Studio version (which lacks MoGraph). If you'd rather bake the path yourself, a script sketch is below.
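
As an alternative to an Xpresso trail, here is a minimal Script Manager sketch that samples the selected object's world position on every frame and bakes the path into a linear spline. It assumes an R19-era API (BUILDFLAGS_0 was renamed BUILDFLAGS_NONE in later versions).

```python
import c4d

def main():
    doc = c4d.documents.GetActiveDocument()
    obj = doc.GetActiveObject()            # object whose path to trace
    if obj is None:
        return
    fps = doc.GetFps()
    f0 = doc.GetMinTime().GetFrame(fps)
    f1 = doc.GetMaxTime().GetFrame(fps)
    points = []
    for frame in range(f0, f1 + 1):
        doc.SetTime(c4d.BaseTime(frame, fps))
        # Re-evaluate animation/expressions so GetMg() is current.
        doc.ExecutePasses(None, True, True, True, c4d.BUILDFLAGS_0)
        points.append(obj.GetMg().off)     # world-space position
    spline = c4d.SplineObject(len(points), c4d.SPLINETYPE_LINEAR)
    spline.SetAllPoints(points)
    spline.Message(c4d.MSG_UPDATE)
    doc.InsertObject(spline)
    c4d.EventAdd()

if __name__ == '__main__':
    main()
```
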
  19. To the best of my knowledge, the answer to your question is "no," and rightfully so, in my opinion. Imagine how confusing that would become if you opened a file set up that way a year from now, or if someone else had to dissect your file! Referenced or instanced nodes in Xpresso setups would get very confusing very quickly. Like you said, I think your best bet is to create some user data as your "bridge" (that's a great term for it, by the way!). Whether you do it this way or you figure out a way to create your instanced node, make sure your priorities are correct!! Tags are calculated left to right, top to bottom; having them out of order will cause calculations to be computed out of order, leading to rig lag. You can always tweak the priorities in the Xpresso tag's Basic properties if you run into issues.
  20. Animating sutures accurately/believably is no small feat. I see C4D Lite in your bio, is that accurate? That may seriously limit you. What kind of suture are you doing? (The attached image is not mine; it's from a simple Google image search for suture reference.) Long story short: this shot can be as easy or as complex as you want to make it. You could draw out your final spline, including the knot, and animate the End growth of the Sweep NURBS. At the opposite end of the spectrum, you could include hand-animated tissue dynamics of the thread passing through the skin, and thread dynamics using something like the PointAutoRig.py script (look for it from CuriousAnimal). Or you could go anywhere in between. Good luck!
  21. If you've paid for MAXON's MSA, you have a free membership to Cineversity; just email MAXON and ask for your coupon code. Otherwise, without some coding, you won't be able to render six cameras at one time (a rough sketch of the coding route is below). If this is for a paying job and you don't have the MSA/free Cineversity membership, I would think it might be worth the expense of a Cineversity membership, or justification for upgrading to R19. Work smarter, not harder!
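
If you do want to go the coding route, here is a rough sketch of rendering through each camera in a loop from Python. The output path is hypothetical, it only collects top-level cameras, and it's a starting point rather than a production tool.

```python
import c4d

def main():
    doc = c4d.documents.GetActiveDocument()
    rd = doc.GetActiveRenderData().GetData()   # render settings container
    bd = doc.GetActiveBaseDraw()
    cams = [o for o in doc.GetObjects() if o.GetType() == c4d.Ocamera]
    for cam in cams:
        bd.SetSceneCamera(cam)                 # render through this camera
        bmp = c4d.bitmaps.BaseBitmap()
        bmp.Init(int(rd[c4d.RDATA_XRES]), int(rd[c4d.RDATA_YRES]))
        result = c4d.documents.RenderDocument(doc, rd, bmp,
                                              c4d.RENDERFLAGS_EXTERNAL)
        if result == c4d.RENDERRESULT_OK:
            # Hypothetical output path; adjust to your project.
            bmp.Save('/tmp/render_' + cam.GetName() + '.png',
                     c4d.FILTER_PNG)

if __name__ == '__main__':
    main()
```
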
  22. Cineversity has a plugin toolbox called CV-Toolbox. From it, you can download a plugin called CV-VRCam, which allows you to render an equirectangular image/image sequence. This is light-years better than baking a texture because it utilizes all cores of your machine and also opens up the ability to render across a network; the result is the same as the Spherical camera in R19. No, you won't be able to "scale up" the camera. I have not attempted what you're describing, but giving it some thought, try thinking about it backwards. You want a globe projected onto the pufferfish sphere, and you're looking at it from the outside. 360 animations are basically animations projected on the inside of a sphere that you can look around within. What if you could get "outside" of that sphere? You'd see the image, but backwards. So to get the globe to project onto the pufferfish, you'd want to map an image of the world onto a sphere, then invert its X coordinates (Y should stay the same); one way to do that flip is sketched below. When the image is projected onto the pufferfish and viewed from outside, the world should read correctly. Getting the particles to render should be a matter of creating the effect just inside the sphere with the inverted map of the world; once projected, it should appear to be on the surface of the globe. I am not sure how to get the offset effect of the particles/globe, however. All of this is based on some critical thinking about the method and my experience creating 360 animations. Hope it gives you a starting point. Eric
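
One way to get that horizontal inversion, assuming a spherical-projection Texture tag on the globe sphere, is simply to negate the tag's Length X. A tiny sketch; verify the parameter matches your setup before relying on it:

```python
import c4d

def main():
    doc = c4d.documents.GetActiveDocument()
    sphere = doc.GetActiveObject()      # the textured globe sphere
    if sphere is None:
        return
    tag = sphere.GetTag(c4d.Ttexture)   # first Texture tag on the sphere
    if tag is None:
        return
    # Negating Length X mirrors the projection horizontally, so the map
    # reads correctly when viewed from outside the sphere.
    tag[c4d.TEXTURETAG_LENGTHX] = -tag[c4d.TEXTURETAG_LENGTHX]
    c4d.EventAdd()

if __name__ == '__main__':
    main()
```
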
  23. Well, I guess someone has to go first. I know this is posted early, but if someone copies my mesh, I'll take it as a compliment :-D Thank you, Cafe Admin, for putting on this challenge, and good luck to everyone!
  24. Glad to read about the deadline extension. I have my hook modeled and just need to make some final tweaks, then texture, but some deadlines tightened up on me unexpectedly. :)
  25. I've come across a few scenarios where vector math would be handy, but I don't know much about the topic. I must have forgotten that bit of high school trigonometry class! (A few of the basics, in C4D's Python, are sketched below.)
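
For anyone in the same boat, the handful of operations that cover most day-to-day cases are length, normalization, dot product, and cross product. A small illustration using c4d.Vector (the numbers are arbitrary):

```python
import c4d
import math

# Two direction vectors, e.g. from one object toward two others.
a = c4d.Vector(1, 0, 0)
b = c4d.Vector(1, 1, 0)

length = b.GetLength()        # magnitude of b
unit = b.GetNormalized()      # same direction, length 1
dot = a.Dot(b)                # projection; 0 means perpendicular
cross = a.Cross(b)            # vector perpendicular to both

# Angle between the two vectors, via the dot-product identity
# cos(theta) = (a . b) / (|a| |b|)
theta = math.acos(a.Dot(b) / (a.GetLength() * b.GetLength()))
print(c4d.utils.RadToDeg(theta))  # 45.0
```
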
