
Everything posted by pfistar

  1. Reposting here a previous post by contrafibbularities: There's a free script called "AngleSelect" by Geespot you may find useful: http://forums.cgsociety.org/showpost.php?p=3963058&postcount=17 You need to be registered at CGTalk to download it.
  2. Greetings all, I've been playing around with deforming a geo surface with ThinkingParticles coupled with the Proximal shader in the Displacer object. It's working OK for the most part, but a finer point is eluding me a little - namely that a portion of the particles are not sticking properly to the surface. https://www.dropbox.com/s/5tzeqmsfe7ckxbo/proximal-shader-particles-events_v03.c4d?dl=0 In the linked scene above, there's a hero Sphere whose surface is being deformed by the particles that land on it, by way of a TP Deflector node (using said Sphere as deflector geometry) and a Displacer object using a Proximal shader with the TP particles as Proximal's reference. This works well in general, but as the displacement accumulates with more particles hitting the surface, some of those particles escape, or rather break past the mesh after they've stuck to it for a frame or two. I'd guess this has to do with a threshold normal angle of whatever polygon each particle is hitting. I appended a PFreeze node off of the Event output on the PDeflector node, but this doesn't quite do the trick. I'm presuming what this should do is bring any particle's position to a halt at the exact frame it reaches the deflector surface, but perhaps this is incorrect?
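For anyone trying to picture what that freeze step should do conceptually, here's a minimal plain-Python sketch (no `c4d` module; `center` and `radius` stand in for the deflector Sphere, and the function name is illustrative, not a C4D API call): once a particle has penetrated the deflector, snap it back onto the surface along the radial direction and zero its velocity so later mesh displacement can't push it through.

```python
import math

def freeze_on_sphere(pos, vel, center, radius):
    """Conceptual 'freeze on collision' step: if the particle is at or
    inside the sphere surface, project it radially back onto the surface
    and kill its velocity; otherwise leave it untouched."""
    dx, dy, dz = (pos[0] - center[0], pos[1] - center[1], pos[2] - center[2])
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist <= radius:  # particle has reached/penetrated the deflector
        scale = radius / dist
        pos = (center[0] + dx * scale,
               center[1] + dy * scale,
               center[2] + dz * scale)
        vel = (0.0, 0.0, 0.0)
    return pos, vel
```

The key detail is that the projection happens every frame after the hit, not just on the hit frame - which is exactly the behavior that seems to be missing when particles "escape" later.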
  3. Hi Hrvoje - Thanks for the suggestion. I just gave it a look though I don't see how that would work since the gizmos in the viewport represent nodes that only seem to exist as objects within Xpresso. Unless there's some secret I don't know, I'm not certain there is a way to access those Xpresso nodes as "objects" via the Object Manager, other than by clicking on the Xpresso tag to open the Xpresso window.
  4. Hi all, An esoteric but simple question: Anybody know of a way to make all Xpresso gizmos invisible in the viewport? In my specific case, I have a few XParticles Wind nodes in the scene, and each has its large plane-&-arrow gizmo to represent the wind's direction. This is all well and good, though I like sending clients hardware playblasts and it would be nice to not have those extra distractions. I tried all options in the Display panel and pored through Preferences as well but couldn't find the right check-box. Many thanks NpF
  5. Thanks very much natevplas for the quick response! Exactly what I was looking for!
  6. Hi all, Wondering if anyone knows a way to override the Up Vector of clones on a surface. In short, I have a human cell model whose surface is animated using Displacer and Random deformers. I need to scatter some small receptor structures on its surface, which I'm doing using Cloner's Surface Distribution function. The positions of the clones follow the animated surface, which is what I want; however, the clones also inherit the normal direction of the surface (polygon, or point, I'm not sure which), which is something I don't want. Rather, I'd like the normal direction to be inherited from the center point of the object itself, so that the clones' positions follow the surface, but they always face directly outward, rather than following the vector of the nearest poly or point. I'm guessing this might be a job for one of the MoGraph Effectors, or alternatively there's some Xpresso-based solution, but I haven't figured it out. Also: Is there a way to make a sub-selection of polygons on the cell object, after I've applied the Displacer deformer and Random Effector? "Active selection" doesn't seem to be available after these are applied to the base mesh. Thanks ahead of time to anyone with tips! NpF receptors_minimum.c4d
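For what it's worth, the "face outward from the center" direction is just the normalized vector from the object's center to each clone - the quantity an Xpresso or Python Effector setup would compute per clone. A plain-Python sketch (names are illustrative, not C4D API):

```python
import math

def outward_up_vector(clone_pos, center):
    """Unit vector pointing from the object's center to the clone --
    the alignment to use in place of the surface normal so every clone
    faces straight out from the cell's center."""
    d = tuple(p - c for p, c in zip(clone_pos, center))
    length = math.sqrt(sum(x * x for x in d))
    return tuple(x / length for x in d)
```

In an actual effector you'd feed this vector into each clone's alignment while leaving its position (which already sticks to the surface) untouched.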
  7. My guess was that it might depend on the farm's particular setup, but many thanks for the response and for lending a little more clarity! -NpF
  8. Greetings to Hrvatska, from Brooklyn. Thanks for the fast response - and for the clarifications! 'Tis a pity Xpresso keys can't be shown in powerslider, especially for non-dual-monitor types like me. Best, Nik
  9. Greetings all, Hoping someone can clarify a few things regarding the MoGraph cache feature, particularly as it would pertain to network or farm rendering. I understand the difference between caching to RAM and caching to .mog file, but if I have some Cloners cached to RAM, will the C4D file hold onto that cache info, or will I have to run a re-cache if I quit and re-open C4D? Regarding the use of .mog files and remote render farms, is there any standard nomenclature for the folder name I should save the .mog sequence to, or does this tend to vary from service to service? Furthermore, is it generally better practice to render out the .mog file for farm rendering, or does simply rendering the cache to RAM/C4D file tend to suffice? One more related question: would I see any difference in CPU render-time between caching to RAM/file vs. caching to disk? Many thanks, NpF
  10. Greetings all, Something of a newbie question that's been bugging me for a few weeks now. I'm wondering if there's a way to get keyframes created in Xpresso to show up in the main timeline (below the viewport). I find it a bit of a hindrance to have to open up the F-Curve or Dopesheet to move keys around every time I need to make an adjustment. Many thanks! NpF
  11. Just found my way back here as this particular topic came up in my workflow and realized I never responded to you, so sending apologies for absent-mindedness which I hope didn't come off as rudeness. In any event, looking at your words above and nodding in agreement - many thanks for helping me make more sense of this obscure but useful product feature! Cheers, Nik
  12. @Jed, Many thanks. I appreciate the explanation! I think this clarifies quite a bit.
  13. Wow, this is great - thanks very much for pointing me in this direction! As I mentioned, I'm quite new to Xpresso (but I should mention I have been studying Houdini for the past year or so, and the approach is somewhat similar). Looking at the way you amended my file, there are still a few things that are a little mysterious to me: So the tag that contains the main setup (Iteration > LinkList > Object > PMatterWaves, etc.) has to somehow reference the "Global Velocity" data on each of the spheres, so based on your setup, I would assume the Global Velocity data gets assigned to the sphere object itself through their individual Xpresso tags, and this data gets collected in the LinkList node? I'm a little confused about what exactly the "Tag" operator node and the one next to it called "Xpresso" are actually doing (see attached jpg). In the "Xpresso" node, it appears we're referencing the tag of the first object in the LinkList and calling on its "Global Velocity" user data? Also, is there any danger in changing the names of the Xpresso tags? In other words, are the names read as string values, or are all references absolute when you're working in Xpresso? Sorry for the continued naive questions - I'm trying to wrap my head around how this system works. Many thanks again, Nik
  14. srek, Thank you again for the suggestion! I appear to have figured things out as far as creating an iteration group and linking it to the emitter node so that particles are emitted across all objects at once! Attaching a new file to show. A couple of follow-up questions, if you would be so generous: I'd love to see the emitted particles inherit some velocity from the emitter geometry so that rather than leaving an immediate trail, they explode outward a bit from the spheres before drifting away. I've approximated the effect (badly) by keyframing a few of the emitter's parameters, but this is of course less than ideal. Here's something from an old post of yours: This makes a lot of sense to me in theory, though I can't seem to make it work in reality (please see my file. Though you're talking about PStorm I don't see how the same wouldn't apply to PMatterWaves). I can't get a green wire when trying to connect my Math node to the PSetData node. I'd guess this is what would happen when you try to connect incompatible data types, but I've set the mode to 'vector' as suggested. I'd also wonder where the position velocity is calculated from on the Emitter object (is it per point, or poly, or from the world position of each new particle at its frame of birth?, etc.) Also, a tangential question that's probably a noob question: Is there a way to enable keyframes created in Xpresso nodes to show up in the timeline under my viewport, the way most keys do? Thank you again, Nik XPresso-Iterator-v02.c4d
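In case it helps anyone following along: the inherited velocity being described is essentially a finite difference of the emitter's position between two frames, scaled by an inheritance factor before being added to the particle's birth velocity. A plain-Python sketch (illustrative names, not C4D API; `factor` is an assumed tuning parameter):

```python
def inherited_velocity(pos_now, pos_prev, fps, factor=0.5):
    """Approximate the emitter's velocity (scene units per second) from
    its positions on two consecutive frames, then scale by an
    inheritance factor -- the kind of vector a Math node would add to
    the birth velocity via PSetData."""
    return tuple((a - b) * fps * factor for a, b in zip(pos_now, pos_prev))
```

The data-type point in the post matters here too: this quantity is a vector, so the wire only connects if both ports carry vector data.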
  15. Many thanks for the response, srek ! While I understand the concept of iteration, I don't have much experience at all with Xpresso so it's still very much a foreign language to me. I looked at this tutorial https://www.youtube.com/watch?v=HT2T9P_tpQo and built the simple scene it demonstrates, however, I am totally lost when it comes to understanding where and how to link the particle generation node (in this case PMatterWaves) into the geometries that I'd want to use as my emitter. In the attached scene, I've dragged the Cube node which I'm running Iteration on into the Object slot of the particle generator, though it appears to yield no results. I'm sure I'm missing a step or two in the process. Thanks again, Nik XPresso-Iterator.c4d
  16. Greetings all, I have a set (9 or 10) of simple spheres or "planets" orbiting around the origin, using a bunch of transformed nulls that drive the animation. I'd like to have each sphere emit some particles, which eventually drift toward the center, but I'd like to save myself the labor of having to copy/paste all the objects, tags and ThinkingParticles nodes to each planet. I would think there might be a way to bake the whole thing to a singular cached geo which could be used as a single emitter surface (vs. having to set up each single planet as its own emitter). I tried using Timeline > Functions > Bake Objects, though this appears to only create a keyframe for each animated transform track (which in this case, are null objects only), but doesn't actually record the mesh data. Character > Point Cache only appears to work on mesh deformation at the object level, but not at the world level; in other words, its parameters are unavailable when the tag is applied to a null object. I've considered turning the whole rig into a MoGraph object and baking that, though I'm not sure whether that would work as I don't know whether MoGraph Bake actually only caches the template points, or whether it also can cache the instanced objects attached to the template points. I've also tried various export / import formats (.abc, .fbx, .dae, etc.) but could not find a way to a solution. Hoping someone might have a tip or two - I'm using R18 Many thanks! Nik
  17. Hi DeCarlo Many thanks for your response. I was able to solve my problem using a MoGraph solution, on the advice from another forum (thanks Luke Letellier - if you happen to be on this forum.) By simply parenting all of my geo objects under a MoGraph Fracture, I'm able to treat the array of objects as I would the clones in a MoGraph Cloner. After the parenting, I apply a Color Shader to the Alpha property of each material, and then apply an animated Plain Effector to the Fracture which takes care of turning on the alpha opacity for each object as the Plain Effector moves along. In addition to the Plain Effector, I apply a Random Weight Effector to the Fracture and this takes care of randomizing the total effect.
  18. Greetings all, I'm posting this question on more than one thread as I have a situation that's not quite a newbie question, though hoping to get some advice or guidance on something specific that I imagine may require an Xpresso setup, though perhaps it's possible to achieve by other means. What I'd like to do is animate the visibility (alpha value, but not the Transparency property) of a group of 50 or so discrete geometry objects so that they go from fully invisible to fully visible one by one. Each object should fade from 0 to 100% opacity over the course of about 6 frames, though randomizing this slightly per object would be optimal (some could take 4 frames, some could take 8 frames, etc.) I'd want to make the fade-on time of each object based on its y-position value, so that objects at the bottom of the pile appear first and objects at the top appear last. The total animation will need to be about 4 seconds, so at 0:00, none of the objects are visible, but at 4:00 all objects are visible. The objects are already placed and I don't have much flexibility to change this. The 50 objects don't all have the same material. There are 3 discrete materials applied across the 50 objects (but none of the objects has more than a single material applied to it.) The brute-force way to do this would obviously be to set up more materials than I already have, apply them to the appropriate objects and manually animate the Alpha property of each material. Kinda trying to avoid this, since object count is fairly high. I imagine this could be approached something like the following (though I don't have much experience with Xpresso or other scripting to know if this is a wise approach): Get the points of an Array object or MoGraph Cloner to conform to coordinates of the pre-placed set of geo objects. Use some animatable property of the Cloner to drive the visibility or alpha property of each object. Any advice here would be massively appreciated! -NpF
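The scheduling part of the request is easy to pin down independently of how the alphas get driven: map each object's y-position into the 4-second window and jitter the fade duration. A plain-Python sketch (all names and the 24 fps / 96-frame assumption are mine, not from the scene):

```python
import random

def fade_schedule(objects, total_frames=96, base_dur=6, jitter=2, seed=0):
    """objects: list of (name, y) pairs. Returns {name: (start, duration)}
    in frames, so the lowest object starts fading first and the highest
    finishes by the end of the window. Assumes 4 s at 24 fps = 96 frames."""
    rng = random.Random(seed)  # seeded so the schedule is repeatable
    ys = [y for _, y in objects]
    lo, hi = min(ys), max(ys)
    span = (hi - lo) or 1.0  # guard against all objects at the same height
    sched = {}
    for name, y in objects:
        dur = base_dur + rng.randint(-jitter, jitter)   # 4..8 frames
        start = round((y - lo) / span * (total_frames - dur))
        sched[name] = (start, dur)
    return sched
```

An Xpresso or Python-tag version would then just compare the current frame against each object's (start, duration) pair to output a 0-100% alpha value.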
  19. @ABMotion - I really appreciate your reply and thanks for the tip! Cranking the scale setting way down did the trick. From MAXON's documentation: It's not obvious to me how this works - it seems counter-intuitive to me that a scene whose dimensions are small (like Chad's coffee-bean scene) would warrant a larger scale setting, though I'd guess it's like this: In the case of using the Camera Space setting, I'd suppose that the luminance value of each pixel is rendered based on distance from the camera's node point. I guess the Scale value multiplies the distance span, yielding a more visible gradient by compressing the span of the gradient into a smaller distance, so a smaller number would yield an apparently shorter span. Or is the opposite true? Does the smaller number spread the distance of the gradient outward rather than inward? Also, a hypothetical question: is this Scale setting an absolute setting, or is it relative to the scene's units scale? Just thinking out loud here, so don't feel obligated to respond unless you feel like it. In any event, thank you again! NpF
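One way to make the "crank the scale down" observation concrete is a toy model (this is an assumption about the behavior, not the renderer's documented formula): treat the written channel value as camera distance multiplied by Scale, clipping at white. A big scene produces big distances, so it needs a small Scale to keep values inside the visible 0-1 range, while a tiny scene like the coffee-bean one needs a larger Scale.

```python
def position_channel(distance, scale):
    """Toy model (assumption, not MAXON's actual formula): the channel
    value grows with camera distance times Scale and clips at 1.0, so
    large scene distances need a small Scale to stay below white."""
    return min(distance * scale, 1.0)
```

Under this model the answer to the question in the post would be: a smaller number compresses large distances into the visible gradient, it doesn't spread it outward.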
  20. Hello all! Trying to up my compositing game and followed Chad from Greyscale Gorilla's otherwise excellent tutorial for getting proper motion and vector blur in an AE composite. https://greyscalegorilla.com/tutorials/your-depth-pass-is-wrong/ For non-artifacting edges (glowing/flaring) in a depth blur, the tutorial suggests using a non-anti-aliased depth pass, but rather than rendering this as a separate pass, the suggestion is to render a Post Effects/Position pass and extract the blue/Z channel from it to generate a non-anti-aliased depth pass, since the Position pass won't pick up whatever you have set up in your anti-aliasing settings. I've set up a test scene to mimic Chad's file, though my Position pass looks nothing like it ought to. Rather than resembling a depth pass, like it should, what I'm getting instead are some large color fields for each frame (see attached .exr file). Trouble-shooting options I've tried thus far: I assumed it might have to do with the Position pass's scale setting, though I tried some increments from a scale of 0.1 to a scale of 100, and none of those made any difference. I also tried switching between Standard and Physical renderer, though that made no difference. Finally, I gave a few of the OpenEXR file output options a try, though this also made no difference. The setting suggested by the tutorial is "Lossy, 16 bit float, zip, in blocks of 16 scan lines". I'm a bit stumped; I don't believe there's anything obstructing the camera, but there must be some setting somewhere that I've missed. I should add that I'm somewhat new to working with OpenEXR files. Hoping someone can point me in the right direction. I'm attaching my C4D file as well as a single frame from the Position pass. Many thanks!! NF zdepth-blur-test.c4d zdepth-blur-test_positionpass_1_0000.exr
  21. Hi CBR Thank you for the response. I've updated my profile to show the current version. I took your advice and copy/pasted all of my scene objects to a new scene, then set up a new set of takes from scratch. That worked - at least for a little while, though eventually I experienced the phenomenon again (non-stick keyframes), fortunately only occasionally. I'm still on alert for its cause, and still can't say for sure there's a definite correlation between keyframes being non-adherent and the presence of Takes. If I figure it out I'll be sure to post something. Thanks again for the reply, NpF
  22. Help! Keyframes won't stick to timeline. Very frustrating! Greetings all! I've encountered this problem before, but it was months ago so I don't recall what the fix was. I have to imagine that there's a single hidden function somewhere in the UI that I've activated or deactivated by accident. In any event, I'm trying to animate a camera by: selecting it in the Object Manager window; making it the active camera in the viewport; moving the timeline marker to the desired frame; setting to red the circular toggle buttons to the immediate left of the position and rotation fields; moving the timeline marker to the next desired frame; using the viewport controls to adjust the position and rotation of the camera; setting again to red the circular toggle buttons on position and rotation. This workflow was working well for me for several days on my current project. Now, out of the blue, this is what happens when I try the same order of operations: I select the camera object and make it the active viewport camera; move the camera and timeline to the 1st desired place; set a keyframe using the toggle buttons - the toggle buttons turn red; move the camera and timeline to the 2nd desired place; set a keyframe using the toggle buttons - the toggle buttons turn red. But when I scrub the timeline backwards to check the motion, the keyframe toggles turn yellow - the position/rotation values at the 2nd key remain, but the position/rotation values at the 1st key are now the same as the 2nd key. I should mention that I've created many Takes within my scene, as I suspect there's a chance this might be related to my problem. However, I've taken care to always set the Main Take as the active Take when creating and animating new cameras within the scene. I'll also mention that I've tried to replicate the problem in a brand new scene file, though when I go through the steps above, I have no problem getting the keyframe values to adhere wherever I set them in the timeline. Any help is hugely appreciated! Thanks and best, NpF


