
pfistar

New Member
  • Content Count

    20
  • Joined

  • Last visited

Community Reputation

0 Poor

Contact Methods

  • Website URL
    www.nicholasfischer.com

Profile Information

  • First Name
    Nik
  • Last Name
    Pfistar
  • C4D Ver
    R18.048 Studio
  1. My guess was that it might depend on the farm's particular setup, but many thanks for the response and for lending a little more clarity! -NpF
  2. Greetings to Hrvatska, from Brooklyn. Thanks for the fast response - and for the clarifications! It's a pity about Xpresso keys not showing in the powerslider, especially for non-dual-monitor types like me. Best, Nik
  3. Greetings all, Hoping someone can clarify a few things regarding the MoGraph cache feature, particularly as it pertains to network or farm rendering. I understand the difference between caching to RAM and caching to a .mog file, but if I have some Cloners cached to RAM, will the C4D file hold onto that cache info, or will I have to re-cache if I quit and re-open C4D?
     Regarding the use of .mog files and remote render farms, is there any standard nomenclature for the folder name I should save the .mog sequence to, or does this tend to vary from service to service? Furthermore, is it generally better practice to render out the .mog file for farm rendering, or does simply caching to RAM/the C4D file tend to suffice?
     One more related question: would I see any difference in CPU render time between caching to RAM vs. caching to disk? Many thanks, NpF
  4. Greetings all, Something of a newbie question that's been bugging me for a few weeks now. I'm wondering if there's a way to get keyframes created in Xpresso to show up in the main timeline (below the viewport). I find it a bit of a hindrance to have to open up the F-Curve or Dopesheet to move keys around every time I need to make an adjustment. Many thanks! NpF
  5. Just found my way back here as this particular issue came up in my workflow and realized I never responded to you, so sending apologies for absent-mindedness, which I hope didn't come off as rudeness. In any event, looking at your words above and nodding in agreement - many thanks for helping me make more sense of this obscure but useful product feature! Cheers, Nik
  6. @Jed, Many thanks. I appreciate the explanation! I think this clarifies quite a bit.
  7. Wow, this is great - thanks very much for pointing me in this direction! As I mentioned, I'm quite new to Xpresso (but I should mention I have been studying Houdini for the past year or so, and the approach is somewhat similar). Looking at the way you amended my file, there are still a few things that are a little mysterious to me:
     So the tag that contains the main setup (Iteration > LinkList > Object > PMatterWaves, etc.) has to somehow reference the "Global Velocity" data on each of the spheres. Based on your setup, I would assume the Global Velocity data gets assigned to each sphere object through its individual Xpresso tag, and this data gets collected in the LinkList node?
     I'm a little confused about what exactly the "Tag" operator node and the one next to it called "Xpresso" are actually doing (see attached jpg). In the "Xpresso" node, it appears we're referencing the tag of the first object in the LinkList and calling on its "Global Velocity" user data?
     Also, is there any danger in changing the names of the Xpresso tags? In other words, are the names read as string values, or are all references absolute when you're working in Xpresso? Sorry for the continued naive questions - I'm trying to wrap my head around how this system works. Many thanks again, Nik
  8. srek, Thank you again for the suggestion! I appear to have figured things out as far as creating an iteration group and linking it to the emitter node so that particles are emitted across all objects at once! Attaching a new file to show. A couple of follow-up questions, if you would be so generous:
     I'd love to see the emitted particles inherit some velocity from the emitter geometry so that rather than leaving an immediate trail, they explode outward a bit from the spheres before drifting away. I've approximated the effect (badly) by keyframing a few of the emitter's parameters, but this is of course less than ideal. Here's something from an old post of yours: This makes a lot of sense to me in theory, though I can't seem to make it work in reality (please see my file; though you're talking about PStorm, I don't see how the same wouldn't apply to PMatterWaves). I can't get a green wire when trying to connect my Math node to the PSetData node. I'd guess this is what happens when you try to connect incompatible data types, but I've set the mode to 'vector' as suggested. I'd also wonder where the Position Velocity is calculated from on the Emitter object (is it per point, or per poly, or from the world position of each new particle at its frame of birth?, etc.)
     Also, a tangential question that's probably a noob question: Is there a way to enable keyframes created in Xpresso nodes to show up in the timeline under my viewport, the way most keys do? Thank you again, Nik XPresso-Iterator-v02.c4d
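     For what it's worth, the vector arithmetic behind that Math (Add, vector mode) -> PSetData hookup can be written out in plain Python. This is only an illustrative sketch of the math; the names are mine, not the actual Xpresso port names:

        # Illustrative only: new particle velocity = its own velocity plus a
        # fraction of the emitter's velocity at the particle's frame of birth.
        def inherit_velocity(particle_v, emitter_v, inheritance=0.5):
            return tuple(p + inheritance * e for p, e in zip(particle_v, emitter_v))

        # e.g. a particle born drifting at (0, 1, 0) on a sphere moving at (4, 0, 0)
        print(inherit_velocity((0.0, 1.0, 0.0), (4.0, 0.0, 0.0)))  # -> (2.0, 1.0, 0.0)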
  9. Many thanks for the response, srek! While I understand the concept of iteration, I don't have much experience at all with Xpresso, so it's still very much a foreign language to me. I looked at this tutorial https://www.youtube.com/watch?v=HT2T9P_tpQo and built the simple scene it demonstrates; however, I am totally lost when it comes to understanding where and how to link the particle generation node (in this case PMatterWaves) to the geometries that I'd want to use as my emitter. In the attached scene, I've dragged the Cube node which I'm running Iteration on into the Object slot of the particle generator, though it appears to yield no results. I'm sure I'm missing a step or two in the process. Thanks again, Nik XPresso-Iterator.c4d
  10. Greetings all, I have a set (9 or 10) of simple spheres or "planets" orbiting around the origin, using a bunch of transformed nulls that drive the animation. I'd like to have each sphere emit some particles, which eventually drift toward the center, but I'd like to save myself the labor of having to copy/paste all the objects, tags and ThinkingParticles nodes to each planet. I would think there might be a way to bake the whole thing to a single cached geo which could be used as a single emitter surface (vs. having to set up each planet as its own emitter).
      I tried using Timeline > Functions > Bake Objects, though this appears to only create a keyframe for each animated transform track (which in this case are null objects only), but doesn't actually record the mesh data. Character > Point Cache only appears to work on mesh deformation at the object level, but not at the world level; in other words, its parameters are unavailable when the tag is applied to a null object.
      I've considered turning the whole rig into a MoGraph object and baking that, though I'm not sure whether that would work, as I don't know whether MoGraph Bake caches only the template points, or whether it can also cache the instanced objects attached to the template points. I've also tried various export/import formats (.abc, .fbx, .dae, etc.) but could not find my way to a solution. Hoping someone might have a tip or two - I'm using R18. Many thanks! Nik
  11. Hi DeCarlo, Many thanks for your response. I was able to solve my problem using a MoGraph solution, on advice from another forum (thanks Luke Letellier - if you happen to be on this forum). By simply parenting all of my geo objects under a MoGraph Fracture, I'm able to treat the array of objects as I would the clones in a MoGraph Cloner. After the parenting, I apply a Color Shader to the Alpha property of each material, and then apply an animated Plain Effector to the Fracture, which takes care of turning on the alpha opacity for each object as the Plain Effector moves along. In addition to the Plain Effector, I apply a Random Weight Effector to the Fracture, and this takes care of randomizing the total effect.
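      In case it helps anyone searching later, a minimal Script Manager sketch of that Fracture + effector setup might look like the following. The symbols used (Omgfracture, Omgplain, Omgrandom, ID_MG_MOTIONGENERATOR_EFFECTORLIST) are my best reading of the R18 Python API, so verify them against the SDK docs; parenting the geometry under the Fracture and wiring the Color Shader into each material's alpha channel are still done by hand, as described above:

        import c4d

        def main():
            # Create the MoGraph Fracture plus the two effectors described above
            fracture = c4d.BaseObject(c4d.Omgfracture)
            plain = c4d.BaseObject(c4d.Omgplain)      # animate this to sweep the reveal
            rand_fx = c4d.BaseObject(c4d.Omgrandom)   # randomizes the effect per object

            for obj in (fracture, plain, rand_fx):
                doc.InsertObject(obj)  # 'doc' is predefined in Script Manager scripts

            # Register both effectors with the Fracture's effector list
            fx = c4d.InExcludeData()
            fx.InsertObject(plain, 1)
            fx.InsertObject(rand_fx, 1)
            fracture[c4d.ID_MG_MOTIONGENERATOR_EFFECTORLIST] = fx

            c4d.EventAdd()

        if __name__ == '__main__':
            main()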
  12. Greetings all, I'm posting this question on more than one thread as I have a situation that's not quite a newbie question, though I'm hoping to get some advice or guidance on something specific that I imagine may require an Xpresso setup, though perhaps it's possible to achieve by other means.
      What I'd like to do is animate the visibility (alpha value, but not the Transparency property) of a group of 50 or so discrete geometry objects so that they go from fully invisible to fully visible one by one. Each object should fade from 0 to 100% opacity over the course of about 6 frames, though randomizing this slightly per object would be optimal (some could take 4 frames, some could take 8 frames, etc.). I'd want to make the fade-on time of each object based on its y-position value, so that objects at the bottom of the pile appear first and objects at the top appear last. The total animation will need to be about 4 seconds, so at 0:00 none of the objects are visible, but at 4:00 all objects are visible. The objects are already placed and I don't have much flexibility to change this. The 50 objects don't all have the same material: there are 3 discrete materials applied across the 50 objects (but none of the objects has more than a single material applied to it).
      The brute-force way to do this would obviously be to set up more materials than I already have, apply them to the appropriate objects and manually animate the Alpha property of each material. I'm trying to avoid this, since the object count is fairly high. I imagine this could be approached something like the following (though I don't have enough experience with Xpresso or other scripting to know if this is a wise approach): get the points of an Array object or MoGraph Cloner to conform to the coordinates of the pre-placed set of geo objects, then use some animatable property of the Cloner to drive the visibility or alpha property of each object. Any advice here would be massively appreciated! -NpF
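      A rough sketch of the timing logic I have in mind, in plain Python (the function name and fps are illustrative assumptions): normalize each object's y-position so bottom objects start fading first, with the fade duration randomized around 6 frames:

        import random

        def fade_times(y_positions, fps=30, total_seconds=4.0):
            """Return a (start_frame, duration_frames) pair per object."""
            total = total_seconds * fps
            lo, hi = min(y_positions), max(y_positions)
            out = []
            for y in y_positions:
                duration = random.randint(4, 8)               # ~6 frames, slightly randomized
                t = (y - lo) / (hi - lo) if hi > lo else 0.0  # normalize y to 0..1
                start = t * (total - duration)                # bottom objects appear first
                out.append((round(start), duration))
            return out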
  13. @ABMotion - I really appreciate your reply and thanks for the tip! Cranking the scale setting way down did the trick. From MAXON's documentation: It's not obvious to me how this works - it seems counter-intuitive that a scene whose dimensions are small (like Chad's coffee-bean scene) would warrant a larger scale setting, though I'd guess it's like this: in the case of the Camera Space setting, I'd suppose that the luminance value of each pixel is rendered based on distance from the camera's nodal point. I guess the Scale value multiplies the distance span, yielding a more visible gradient by compressing the span of the gradient into a smaller distance, so a smaller number would yield an apparently shorter span. Or is the opposite true? Does the smaller number spread the distance of the gradient outward rather than inward? Also, a hypothetical question: is this Scale setting an absolute setting, or is it relative to the scene's unit scale? Just thinking out loud here, so don't feel obligated to respond unless you feel like it. In any event, thank you again! NpF
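      To make that reading concrete (purely my assumption about the relationship, not taken from MAXON's docs): if the stored value were camera distance multiplied by Scale, the numbers would behave like this, which would also explain why a small scene warrants a larger scale:

        def pass_value(camera_distance, scale):
            # Assumed relationship: pixel value = camera distance * scale
            return camera_distance * scale

        # A 500-unit-deep scene needs a scale of ~0.002 to span a visible 0..1
        # gradient, while a tiny 5-unit scene (like the coffee beans) would
        # need a much larger scale of ~0.2 to do the same.
        for d in (0, 250, 500):
            print(pass_value(d, 0.002))  # -> 0.0, 0.5, 1.0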
  14. Hello all! Trying to up my compositing game, I followed Chad from Greyscale Gorilla's otherwise excellent tutorial for getting proper motion and vector blur in an AE composite: https://greyscalegorilla.com/tutorials/your-depth-pass-is-wrong/ For non-artifacting edges (glowing/flaring) in a depth blur, the tutorial suggests using a non-anti-aliased depth pass; rather than rendering this as a separate pass, the suggestion is to render a Post Effect/Position pass and extract the blue/Z channel from it to generate a non-anti-aliased depth pass, since the Position pass won't pick up whatever you have set up in your anti-aliasing settings.
      I've set up a test scene to mimic Chad's file, though my Position pass looks nothing like it ought to. Rather than resembling a depth pass, like it should, what I'm getting instead are some large color fields for each frame (see attached .exr file). Troubleshooting options I've tried thus far: I assumed it might have to do with the Position pass's Scale setting, though I tried some increments from a scale of 0.1 to a scale of 100, and none of those made any difference. I also tried switching between the Standard and Physical renderer, though that made no difference. Finally, I gave a few of the OpenEXR file output options a try, though this also made no difference. The setting suggested by the tutorial is "Lossy, 16 bit float, zip, in blocks of 16 scan lines".
      I'm a bit stumped; I don't believe there's anything obstructing the camera, but there must be some setting somewhere that I've missed. I should add that I'm somewhat new to working with OpenEXR files. Hoping someone can point me in the right direction. I'm attaching my C4D file as well as a single frame from the Position pass. Many thanks!! NF zdepth-blur-test.c4d zdepth-blur-test_positionpass_1_0000.exr
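      For anyone wanting to try the same extraction outside of AE, here is a sketch of pulling the blue (Z) channel out of the attached Position-pass EXR with Python's OpenEXR bindings. Channel naming varies with the renderer and pass setup, so the plain 'B' name is an assumption - print the header's channels to check what your file actually contains:

        import OpenEXR
        import Imath
        import numpy as np

        exr = OpenEXR.InputFile("zdepth-blur-test_positionpass_1_0000.exr")
        dw = exr.header()['dataWindow']
        w, h = dw.max.x - dw.min.x + 1, dw.max.y - dw.min.y + 1

        # Read the blue channel as 32-bit floats ('B' is an assumed channel name)
        pt = Imath.PixelType(Imath.PixelType.FLOAT)
        z = np.frombuffer(exr.channel('B', pt), dtype=np.float32).reshape(h, w)

        # Normalize to 0..1 for use as a grayscale, non-anti-aliased depth matte
        z_norm = (z - z.min()) / (z.max() - z.min())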
  15. Hi CBR, Thank you for the response. I've updated my profile to show the current version. I took your advice and copy/pasted all of my scene objects to a new scene, then set up a new set of takes from scratch. That worked - at least for a little while, though eventually I experienced the phenomenon again (non-stick keyframes), but fortunately only occasionally. I'm still on alert for its cause, and still can't say for sure there's a definite correlation between keyframes being non-adherent and the presence of Takes. If I figure it out I'll be sure to post something. Thanks again for the reply, NpF
