Everything posted by Fritz

  1. Hi bwhitz, The Knife tool was completely rewritten for S22, and since it is a very complicated tool, it can unfortunately happen that new issues are introduced. We are aware of this and are looking into it. Regards Fritz
  2. Hi Flonoel, The setting that is preventing this from working is the "deactivation" that kicks in because your objects have been stationary for so long. Go to the dynamics tag on the Voronoi Fracture: Dynamics > Deactivation > Linear/Angular Velocity Threshold. Set both to 0 to deactivate the deactivation :D. Regards Fritz
  3. Haven't tried it as I don't own FumeFX, but this sounds like a job for the Connect object. Try putting the Voronoi Fracture in a Connect generator and use that as the source.
  4. Hi GaryAbrehart, The Volume Builder converts your mesh into a volume data structure whose memory consumption and complexity grow cubically. So halving the voxel size will slow it down by a factor of 2^3 = 8 (see the quick calculation below). You need to find the right use cases for it, because this scaling makes it impractical for some things it might seem useful for. If you feel you need a very low voxel size, it might not be the correct tool for your task. Once the objects have been converted to a volume, it no longer matters for the rest of the operations how complex the input object was; the conversion is usually only a small part of the total computation time. Most parts of the volume workflow are parallelized, so a high-core-count CPU with enough memory helps a lot. This is without question a tool that needs the right hardware to work productively with. Regards Fritz
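     As a rough illustration of that cubic scaling, here is a back-of-the-envelope sketch in plain Python (no Cinema 4D API; the 100 cm bounding box is just an example value):

     ```python
     # Why halving the voxel size costs roughly 8x: the number of voxels
     # needed to fill a fixed bounding box grows with the cube of
     # (box size / voxel size).

     def voxel_count(bbox_size_cm: float, voxel_size_cm: float) -> int:
         """Approximate voxel count for a cubic bounding box."""
         per_axis = bbox_size_cm / voxel_size_cm
         return round(per_axis ** 3)

     bbox = 100.0  # example bounding box edge length in cm
     for voxel in (4.0, 2.0, 1.0, 0.5):
         print(f"voxel size {voxel:>4} cm -> ~{voxel_count(bbox, voxel):,} voxels")

     # voxel size  4.0 cm -> ~15,625 voxels
     # voxel size  2.0 cm -> ~125,000 voxels
     # voxel size  1.0 cm -> ~1,000,000 voxels
     # voxel size  0.5 cm -> ~8,000,000 voxels
     ```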
  5. Hello DasFrodo, You can just drag the Volume Builder into the field list in the Field Force. It helps to either add a Normalize layer in the Volume Builder or to normalize in the field list in the Direction tab if you activate remapping. The default volume vectors are quite short. Regards Fritz
  6. I wouldn't either, unless you really, really need the extra memory.
  7. Hi TreeW, I assume you exported the meshed volume to an Alembic file (so a polygon mesh), so backwards compatibility problems from volumes not existing in R19 should not be the issue. The Alembic version might be newer in R20 and cause problems in R19. Is your memory really filled up, or does it just claim you are out of memory? Regards Fritz
  8. Effectors have an option called "Visibility" which lets you control exactly that through any effector.
  9. The Connect object offers options for how to merge different Phong tag settings. After that you can make it editable.
  10. Hi ToDo, I am afraid you will need to write a proper Python plugin to be able to do that. As far as I know, neither the Python tag nor the Python generator can implement the draw function; a plugin can (a rough skeleton is sketched below). https://forums.cgsociety.org/t/a-few-python-generator-spline-primitives-wip/1551988/7 Here somebody explains how to write a Python plugin. Regards Fritz
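      Not from the linked thread, just a minimal sketch of what such an ObjectData plugin with a Draw() implementation could look like. The plugin ID is a placeholder (get a real one from Maxon), and using "Obase" as the description resource is an assumption to avoid needing custom .res files; save it as a .pyp file in the plugins folder:

      ```python
      import c4d
      from c4d import plugins

      PLUGIN_ID = 1000001  # placeholder ID, replace with your own registered ID

      class DrawExample(plugins.ObjectData):
          def GetVirtualObjects(self, op, hh):
              # Return an empty Null; this example only cares about viewport drawing.
              return c4d.BaseObject(c4d.Onull)

          def Draw(self, op, drawpass, bd, bh):
              if drawpass != c4d.DRAWPASS_OBJECT:
                  return c4d.DRAWRESULT_SKIP
              bd.SetMatrix_Matrix(op, bh.GetMg())   # draw in the object's space
              bd.SetPen(c4d.Vector(1, 0, 0))        # red
              bd.DrawLine(c4d.Vector(0), c4d.Vector(0, 100, 0), 0)
              return c4d.DRAWRESULT_OK

      if __name__ == "__main__":
          plugins.RegisterObjectPlugin(id=PLUGIN_ID,
                                       str="Draw Example",
                                       g=DrawExample,
                                       description="Obase",  # assumed; normally a custom .res
                                       icon=None,
                                       info=c4d.OBJECT_GENERATOR)
      ```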
  11. Looks like you are missing a Phong tag.
  12. If you are on R19 SP2, try upgrading to SP3. There was a bug introduced with SP2 that produced wrong results on the Illustrator import if you had a "join splines" option enabled. I am not sure what the exact name of the option is.
  13. Hi asiejenski, Put a material on the first version and decrease the voxel size, and I would say you are pretty close. You won't get the same result, because Arnold is volume rendering the data, which none of the built-in renderers support. You could try adding an SSS material, which will come closer to the soft look of volume rendering. Regards Fritz
  14. Hi Visionlux, Yes. The Freeze layer has several grow options and also a subfield to control the look of the growth. Regards Fritz
  15. Hi TimW, I am not sure, but I believe the statue is in the content library. Make sure you are on R19 SP2. There were some important fixes for detailing in that version, though nothing for your issue. Regards Fritz
  16. Hi Tim, The detailing option can be a bit challenged by higher-poly meshes. The issue here is that the original mesh created by the breaking algorithm on the inside faces isn't very homogeneous in its tessellation. You can see that by turning off detailing and turning off the N-gon creation in the "Object" tab. What would help is reducing the "Maximum Edge Length" in the detailing, but that increases complexity and it will take a while to compute. Even that will not fully solve the issue. Regards Fritz
  17. That is R19 SP2. Do you use the same version on your work computer? Can you post a simplified scene? I could then have a look on Monday to see if I can reproduce it.
  18. As soon as you render it, the displacer in emulation mode is turned off and what you see is just the material displacement. Are you maybe on two different versions? Material layering for the emulation mode was broken, and I fixed it for R19 SP2.
  19. Hi carlosdivega, I can't reproduce the camera issues. It always renders the camera that is selected, for all renderers (the square in the Object Manager is white). If you don't have any camera selected, it will render the top-left view. For the Hardware OpenGL renderer it is normal to render all the helper lines, but for ProRender it is not. In fact, I would say it is impossible that ProRender does this; something seems to fail and it falls back to the Hardware OpenGL renderer instead. No other renderer can display the handles and helper lines. Regards Fritz
  20. Hi Fauntail, If you are using a third-party renderer, your contact with the built-in materials might be very slim. Areas where I would say they are still useful:
      Shader Effector / Displace deformer: With these you can sample shaders and materials in the scene, and it is often useful to do this through a material (with a texture tag) so you can better control the projection of the shader. For many cases, however, it is enough to use the "Custom Shader" slot directly.
      Non-photorealistic rendering / flexibility: While most third-party renderers are great at photorealistic results with global illumination, they don't shine as brightly with non-photorealistic effects like what the built-in Sketch & Toon tools can do. In general, if you want flexibility in your output and not only nice photorealistic shiny balls, Standard and Physical can be better than many other renderers.
      Always use the right tool for the job, and sometimes that tool is one of the built-in renderers. If you are only doing, for example, photorealistic product shots, this might not be important for you. Regards Fritz
  21. Hi smoresahoy, You can set the cloner to "Blend" in the "Clones" setting and clone one light with 0% intensity and one with 100% intensity. With the Shader Effector you can load a texture tag (I don't think Octane textures work, so use a dummy Cinema texture) or a shader directly to control the blending. For that you need to increase the "Modify Clone" setting in the "Parameter" tab of the Shader Effector. This lets you control the blending of the clones between the one with 0% intensity and the one with 100% intensity. Regards Fritz
  22. Hi dbassett, Are you using the "Render Instance" mode in the cloner? Maybe try playing around with these modes. The scene looks like you could use render instances without downsides; they render faster and are sometimes less error-prone in these kinds of situations. Regards Fritz
  23. You could also try to describe your problem in a bit more detail, so we can try to help you. Are you using the "Hardware OpenGL" renderer or ProRender? Is it not using the correct camera, as in position/orientation, or is it not looking like a clean beauty render of the scene?
  24. Hi carlosdivega, I am not completely certain what you mean by "camera used", but I'll try to guess what your issue is. OpenGL is not the same as OpenCL.
      OpenGL (the G stands for Graphics) is the graphics API you usually see being used for realtime applications (like games). It is used to rasterize geometry to create an image, and the "3D accelerated" GPUs we all have in our computers are optimized to do exactly that.
      OpenCL (the C stands for Computing) is a more general API that you can use to do any calculations you please on the GPU. This is the API that is used in ProRender to do raytracing on the GPU. Raytracing is another way of creating an image from a scene.
      The "Hardware OpenGL" renderer is basically the viewport with some extra features. This also means it reacts to the filters you set in the viewport! It is meant for preview renderings and can look quite nice with the correct settings. The "Software OpenGL" renderer should do the same as the "Hardware OpenGL" renderer, but without using the GPU. There are some rare use cases for that, and I would assume most users can ignore that renderer. ProRender, on the other hand, should not show any gizmos in the result if you use the external renderer to render to the Picture Viewer.
      I tested your scene and switched through the renderers, and everything seems to render the active camera with the correct renderer. Regards Fritz
  25. I am pretty sure I didn't understand what you mean, but maybe this helps. xpressodrivingrotations.c4d