Leaderboard


Popular Content

Showing content with the highest reputation since 07/29/2020 in Posts

  1. 12 points
    If I may, I love Cinema 4D, and my intent with my avatar is not to profess fanatical devotion to Blender. Now, Blender is a good program. Its users do have a lot to be happy about. I did try it when the interface improved with 2.8. But despite those improvements, it is still a bit clunky. Everything is there and for the most part stable, but (IMHO) the UI unnecessarily gets in the way of fully enjoying the program. In short, C4D is a lot more fun to use!

    So why do I have that avatar? To serve as a reminder to MAXON's leaders, should they trawl the site, that the hobbyist community does have a pretty valid option other than C4D for our CGI fix. Yes, we could follow the subscription plan, but we are hobbyists... we use C4D for the love of it and not as part of a business model. As for me, should my personal financial circumstances change such that I can no longer afford to stay current with the program, I still want to have something that works, and not watch years of work go away when my subscription turns off. You can ONLY keep C4D on with a permanent license, and those costs have increased significantly (from $620/year in 2017 for the Studio MSA to ~$950/license upgrade).

    So my avatar is really part of that old argument of subscription vs. permanent licenses. I am a hobbyist who wants permanent licenses as long as they remain affordable for a hobbyist. Blender, as an open-source program, will always be affordable. Blender gives us options... a reminder for the MAXON employees and CEO who visit the site, and not intended to disrespect its members.

    Dave
  2. 5 points
    I just wish MAXON would make an Indie version of C4D that's a bit more affordable for hobbyists. The affordable cut-down editions were great for that (the Mograph edition had enough features for me while being significantly cheaper than Studio), but they have been eliminated.
  3. 4 points
    I really wish they would consider that in the future: an indie version of C4D. Especially now, with Maya Indie arriving worldwide to compete with Blender, MAXON needs to start thinking more about its indie users.
  4. 3 points
    I started on this a while back, collecting texture maps from NASA and wanting to add another educational project to my portfolio. I reloaded it with text instead of my lame narration. Sorry for those who had to hear that!
  5. 3 points
    Thanks for mentioning the RingLoop plugin @Cerbera @thanulee. I only wanted to point out that I fixed a bug in the "Ring" functionality and redesigned the plugin to provide separate "Ring" and "Loop" commands (at the request of a user who wanted to be able to assign separate shortcuts). Additionally, I have also made the "Skip" amount scriptable. Example scripts are included in the newest version, 0.4. Again, you can assign shortcuts to any of these scripts, allowing for a speed-up in workflow if you frequently require a particular skip amount.
  6. 3 points
    Ringloop - The lovely free plugin on the very front page of this site kindly shared by @C4DS?! CBR
  7. 3 points
    That screenshot in the post is definitely not R7 - the interface looks way too fresh, IMHO. Okay, so others in the thread have already explained how texturing for game engines works. It's an entirely different beast from anything you would do in C4D or other 3D software. Unless you're using triplanar mapping, pretty much everything is going to need a UV map - even a flat plane. A game engine simply does not have concepts like cubic or spherical mapping unless you specifically write a shader for it. I personally have not tried or used kbar's 4D Paint yet, but that is mostly because I use Substance Painter / Substance Designer for my texturing workflow. These two tools are pretty much industry standard, and they do everything you could ever need for game engine texturing; Painter especially is pretty easy to get into. Photoshop as a texturing tool is simply outdated by now, and has been for years. People definitely work on diffuse and other textures in it, but they certainly do not use it as a texturing tool. It's too tedious and lacks even basic features for the quality of texturing that is expected these days.
  8. 2 points
    Daesu, a historical Korean headdress reserved for queens and crown princesses and worn at wedding ceremonies. Inspired by the @netflixph Kingdom series. Rendered in Cycles4D. Interestingly, I have grown fond of the renderer.
  9. 2 points
    Pixar or Walt Disney should hire you lol
  10. 2 points
    Hmm... I know of a Delete Without Children command, but I actually haven't seen a Copy Without Children. But since I haven't smuggled an ad for my Patreon into a post for a while, here's a script that duplicates the currently selected object and inserts the copy directly behind the original:

        import c4d

        def main():
            # 'op' is the active object, predefined in the Script Manager
            if op is None:
                return
            # Clone without its children, then insert the copy
            # directly after the original in the Object Manager
            theClone = op.GetClone(c4d.COPYFLAGS_NO_HIERARCHY)
            theClone.InsertAfter(op)
            c4d.EventAdd()  # tell Cinema 4D the scene has changed

        if __name__ == '__main__':
            main()

    (Now waiting for someone to explain that there is such a copy already... )

    ----------
    Learn more about Python for C4D scripting: https://www.patreon.com/cairyn
  11. 2 points
    If your animation is based on transformation parameters like PSR, it can be wired in XPresso relatively easily. The spline modifier in the Range Mapper node can be used to create an offset in the animation. example
  12. 2 points
    Seeing the answers so far, I may be on the wrong track. Please bear with me if I'm running in the wrong direction. As I understood the question, the goal is a polygon selection based on Fresnel. Now, for me in this case Fresnel is basically the angle between two vectors (actually I think it's rather the change of the angle of a ray of light when entering a material with different optical properties - still way simplified): the vector defining the direction of the polygon (the normal vector) and a camera ray.

    Now, this could be achieved with a bit of Python. But we shouldn't reinvent the wheel; instead, let's give some credit to Donovan Keith, who already used the Python tag to create the "CV-Parametric Selection Tag". The CV-Parametric Selection Tag provides us with the means to select polygons based on different conditions, one of them being the direction a polygon is facing. That is already one part of the equation we are interested in: the polygon normal. In order to answer the question correctly, we'd need to take the camera ray which hits the polygon into account. But to make it a bit simpler (and also because it's probably not possible with this approach), I will only use the direction from the object origin to the camera position instead of the actual camera ray.

    Using these ingredients, I get the following. I doubt I'd be allowed to upload a scene, because Donovan's tag is actually Cineversity content. Instead, here's how I set it up: a very small and simple XPresso tag feeds the view direction from the camera into the CV-Selection tag. The data type for the subtraction has to be Vector, and the "Vector" input of the CV tag is the one from the "Facing" parameter group (see below). And finally, the CV-Parametric Selection Tag itself: as said before, I utilize the "Facing" condition. With the above setup it would select those polygons facing the camera within a certain tolerance - that's why the "Invert" checkbox is set. Playing with "Tolerance" you can change the number of selected polygons. Depending on what you are after, you may also want to set "And Opposite" to take care of the backside as well.

    As said before, this is not a mathematically correct solution, but depending on the view parameters and an object's geometry, I think it may already be sufficient - and hopefully it serves you as a starter for better solutions. Cheers

    Additional notes: Due to C4D's handling of Selection tags, Donovan's tag can be a bit finicky to use. Be careful, save often, and read the tag's docs carefully (are there any? I didn't check, but given the internal implications there should be...). Hopefully, what MAXON just demonstrated with Neutron will let us achieve things like this far more easily in the future.
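    For the "bit of Python" route mentioned above, here is a minimal Script Manager sketch of the same simplified idea: select the polygons whose normal faces the camera, comparing each face normal against the object-origin-to-camera direction (the same simplification as above, not a true per-polygon camera ray). The TOLERANCE threshold is a made-up value to play with; this is an illustration, not Donovan's tag.

        import c4d

        TOLERANCE = 0.3  # dot product: 0 = perpendicular, 1 = facing head-on

        def main():
            if op is None or not op.IsInstanceOf(c4d.Opolygon):
                return
            bd = doc.GetActiveBaseDraw()
            cam = bd.GetSceneCamera(doc) or bd.GetEditorCamera()
            # Direction from the object's origin to the camera, in object space
            view_dir = (~op.GetMg() * cam.GetMg().off).GetNormalized()
            points = op.GetAllPoints()
            sel = op.GetPolygonS()  # the live polygon selection
            sel.DeselectAll()
            for i, poly in enumerate(op.GetAllPolygons()):
                # Face normal from two edge vectors (fine for planar polygons);
                # flip the comparison if your normals point the other way
                a, b, c = points[poly.a], points[poly.b], points[poly.c]
                normal = (b - a).Cross(c - a).GetNormalized()
                if normal.Dot(view_dir) > TOLERANCE:
                    sel.Select(i)
            op.Message(c4d.MSG_UPDATE)
            c4d.EventAdd()

        if __name__ == '__main__':
            main()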
  13. 2 points
    And if people keep doing that (sometimes more than once), and happily display both an unwillingness to put in the work themselves and an unwillingness to actually pay for professional help (not to mention build a reputation and relationships of their own), the few who are willing to answer questions for free get less and less willing, until the forum is finally completely dead. People have no clue anymore how forums (ideally) work.
  14. 2 points
    Interesting..... as Neutron Man (or Neutrino Man)... is Srek wanted for going too fast? Did he exceed the speed of light? Will Srek be arrested by Albert Einstein? Dave
  15. 1 point
    Problem is getting the same random number twice. If that's not an issue, then this works (click in the viewport to update) - link_clones2.c4d Also, one man's random is another man's "I see a pattern". Exclusive random numbers are possible in Python - like lottery results; see the sketch below.
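    For reference, drawing without repeats (the lottery-style behaviour mentioned above) is built into Python's standard library - a minimal sketch:

        import random

        # random.sample draws without replacement, so no value can repeat -
        # e.g. six distinct lottery numbers from 1..49:
        print(sorted(random.sample(range(1, 50), 6)))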
  16. 1 point
    Found this issue myself when using single lights. From memory, the lights are counted down from the top when using single lights. So if you put in your 8th light and can't see it, you can move it to the top of the Object Manager to see it, and the last one on the list will turn off. Deck
  17. 1 point
    With C4D's (relatively) new fields you can build the effect yourself: create a cloner set to Grid to create your blocks, then use a Plain effector with the "Visibility" parameter, drag your desired shape into the Falloff / Fields tab, and set it to Volume.
  18. 1 point
    Been watching Kdramas during Quarantine. Really gave me some ideas
  19. 1 point
    AutoCAD is not so bad at all, but I still couldn't handle it. It was my first CAD program, and it was like being bitten by a dog as a child and being afraid of all dogs after that. I just don't like it. I was surprised that no one had mentioned SolidWorks here as a strong CAD package yet. What I particularly like about it is its compatibility and interaction with other software - almost any 3D application. If you need integration with multiphysics simulation software, there is the LiveLink interface for COMSOL. For interaction with 3D scanning software, there is scan-to-SolidWorks for Artec, FARO, etc.
  20. 1 point
    BTW, did you try a different object as the source for hair? Maybe you have a simple sphere with the "Render Perfect" option enabled, which breaks the IRR. If that's the case, simply turn off this option and increase the segments if needed...
  21. 1 point
    You nailed it! That certainly gets the point across that the moons of Mars really are not moons as we think of moons - more like captured asteroids, as you say. So the bluish gradient captures our night sky as seen through our atmosphere, but how do you eliminate the flickering without having ridiculously high anti-aliasing values during rendering? Some stars just do not render consistently from frame to frame... even when the camera is not moving. I've tried everything (high AA settings, rendering out to twice the finished image size and then reducing in post) with only somewhat passable results (still not happy) - even with static backgrounds. Admittedly, I have not done a space animation in quite some time (3 to 4 years), so maybe the denoising algorithms have improved significantly since then - but I would still like to hear how you did it. Dave
  22. 1 point
    I want to thank all of you in this thread who have given me some type of input or direction. Since posting, I have done quite a bit of research into this topic and have found that Substance Painter is indeed the modern de facto standard for doing exactly what I was asking about. However, being the crème de la crème comes with a price, in the form of training. Although there are a plethora of training videos scattered across the internet dealing with Substance Painter, sadly the majority of those videos are at the intermediate to advanced level, even when their title states Beginner or Fundamentals - especially the training videos found right on Allegorithmic's Substance website. Nice videos, but very fast-paced and still geared more towards the advanced stages. That said, it looks like I will be posting more than my fair share of questions regarding the "marital relationship" between Cinema 4D and Substance Painter right here in "Textures & UVs" in the near future. Once again, thank you everyone.
  23. 1 point
    Thanks for the responses. The LOD object works as expected. I didn't have to create a different set of LOD objects; I just parented it to my current object and it worked.
  24. 1 point
    You could use a couple of effectors parented to your camera, like the file and grab below. But just like using any effector, this only hides the clones and therefore doesn't give any speed increase. To get that, you would be better off changing the floor shape (or a false / hidden floor shape) to a triangle similar to the camera view; obviously that would work for a still, but probably not so much for a moving camera. Deck c4d278_cull_objects_outside_camera_view_0001.c4d
  25. 1 point
    Maybe the LOD object is an option.
  26. 1 point
    Thanks for the help. I knew it was probably operator assholitus.
  27. 1 point
    Game engines are great, but they are also very limiting to work with. Lightmapping especially (which you kind of need in UE if you want photorealistic results; no idea how good RTX is by now) is a pain in the butt to work with and, due to the lack of previewing (like lower-res renders in Octane etc.), very time consuming. You are basically changing something that you are not 100% happy with, and then you wait an hour for the light to compile. Of course there are other serious drawbacks too, such as limited polycounts (although not as much as it used to be) and the fact that you just can't send a client an Unreal Engine project to have a look, since you need a powerful PC to run it. It just doesn't run sufficiently well on office PCs.
  28. 1 point
    Thanks guys. Yep - very familiar with the 7 key etc... although "7" has always seemed a weirdly random choice : ) My next question was going to be... is this scriptable? But I was wonderfully pre-empted. Thank you Cairyn. That little script is in use already and will be very helpful : ) So, as testimonials are an excellent form of advertising, let me say that if you're interested in learning Python for C4D use, Cairyn's course via Patreon is top quality and very highly recommended. I am a student - although somewhat behind on my homework : ) I very much doubt you'd find anything else even remotely comparable. Subscribe today! : )
  29. 1 point
    @MikeA thanks for the link - there's a ton of ideas there. Thang knows his stuff. After I made the model in post 1, it bugged me that although it worked, I didn't really understand why the red spinning part had to rotate at 2X the speed of the larger part, or even why the cylinders stayed in sync in the slots. I have managed to solve it with a bit of simple geometry, if you're interested - it turns out the size of the red part doesn't matter -
  30. 1 point
    Nicely done! I felt like I was at a kiosk in a science museum. Very professional and some good information. I also like how you showed the orbital path of the moons with their inherent wobble. Plus the stars rendered very well... no flickering (there could be a whole tutorial on how to render star backgrounds properly... with and without motion blur. It is not as trivial a task as you would think)! So very well executed. The only thing I questioned was the size comparison of Mars' moons versus our Moon. Now, there are pictures at the Nasa.gov site which do match pretty closely what you showed. But Earth's Moon is 3,475 km across, versus Mars' moons at no more than 27 km. If you were to actually make this to scale, it would look like this: Yeah... some artistic license needed to be taken. Dave
  31. 1 point
  32. 1 point
    This is easy to make in any renderer. I shall describe how to do it for Physical, as this is not in the Corona category. You just need a basic grey in the colour channel (pick it directly from any reference photo), a simple reflectance channel (roughness set to around 25% and reflection level at around 40%) with dielectric Fresnel (PET), and a simple noise in the bump channel! Find a real close-up of the texture, then go through the noise types until you find one that closely matches it, and scale it correctly. Bear in mind that materials work in conjunction with lighting, so you will need some of that in the scene to see your bump properly. You also need something in the scene for reflections to reflect - most people use an HDRI on a sky object for this, optionally hidden from camera with a compositing tag. CBR
  33. 1 point
    There is a Shader field, and it works with the Fresnel shader as well. Updating is a bit limited, though: you can set it to update each frame, but when navigating on a fixed frame it won't update live. You can preview the effect by running the scene, though.
  34. 1 point
    Load all of your textures into Photoshop, stack them as layers and convert them to a Smart Object. Now you can scale, move, stretch or duplicate the Smart Object to your liking, and all textures remain untouched and aligned in the source file. You are nondestructive now and can also add more textures to it.
  35. 1 point
    Couldn't you just get the animation sorted out as keyframes, and then use a slider to advance / reverse the timeline? Not that I have ever done that, but I presume it must be possible... CBR
  36. 1 point
    Looking up a bit of biology is always helpful. Human eyes are actually quite bad in resolution. Altogether, from left to right over a field of view of more than 120°, we have 3.3 to 7 million cone receptors per eye (they are responsible for day vision, so basically what we use when we look at a monitor). Some of them are sensitive to red, others to blue or green. If they were evenly distributed, we wouldn't need an HD monitor at all.

    Because evolution is smart (and had plenty of time), there is a small area right in the middle (where we look at) that has a much higher density of cones. This area is called the fovea centralis. It is roughly 1.5 mm in diameter and contains about 600,000 of these cone cells. This area translates into roughly 5° of sharp field of view; everything outside that gets progressively less sharp. Therefore, if we combine the resolution of the two eyes, we get (just for the area we really see as sharp) about 1.2 megapixels. That is equivalent to a 720p resolution.

    So why can we see the difference between 720p and 1080p? Because we are not sitting 13 meters away from a 55" TV (which is what the 5° FOV would require). We are rather sitting 4 meters away from the TV. That translates to an approximately sharp viewing area of 35 by 20 cm. To see the whole screen that sharply, your TV needs roughly 9 times 1.2 megapixels, so a good 10 megapixels. That would roughly be UHD.

    So why don't we see a huge difference between full HD and 4K? Well, that is because the 1.2 megapixels of our eyes that we calculated basically have a Bayer pattern (as most digital cameras do). That means every sensor can see just one color; the image gets interpolated to show the right colors, and that reduces the resolution. The resulting resolution is not just one third, since the sensors also sense intensity, but more like 1/2 to 2/3. Therefore we don't need a good 10 megapixels, but just about 6, and that is right between your 2560x1440 resolution and UHD.

    As long as people sit 4 meters away from a 55" TV, they physiologically cannot see a difference between 4K and 8K. And as for the difference between full HD and 4K: well, yes, we can see the difference, but 4K exceeds the capabilities of the human eye in this example, so the difference we sense is not the real difference; it is just the maximum we could possibly see when we have good eyes, are awake, and concentrate, compared to something a bit less. The moment we get big video walls and want to sit 2 meters away, we will have to calculate again, but for now everything over 4K is nonsense (besides very special tasks), and full HD is a very good compromise. Best regards, Jops
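    For anyone who wants to check the numbers, here is the arithmetic from the post written out as a small Python sketch (every figure is the approximation used above, not a measured constant):

        # All figures are the post's approximations.
        cones_per_fovea = 600_000            # cone cells in one fovea centralis
        sharp_pixels = 2 * cones_per_fovea   # both eyes: ~1.2 MP of sharp vision
        print(sharp_pixels)                  # 1200000 - roughly a 720p image

        # At 4 m from a 55" TV, the sharp ~5 degree spot covers about 35 x 20 cm,
        # roughly 1/9 of the screen, so the full screen needs ~9x the pixels:
        full_screen = 9 * sharp_pixels
        print(full_screen)                   # 10800000 - roughly UHD

        # Bayer-style interpolation leaves about 1/2 to 2/3 of that:
        print(full_screen // 2, full_screen * 2 // 3)
        # ~5.4 to 7.2 MP, bracketing the ~6 MP claim -
        # between 2560x1440 (~3.7 MP) and UHD (~8.3 MP)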
  37. 1 point
    Hi. There is also another solution if you wish to try it: create an edge along the middle of your geometry, select that edge and convert it into a spline, and then: menu > Character > Convert > Convert Spline to Joints (each point will get a joint). If the alignment of the joints is not good, run the Character / Commands / Joint Align Tool command. Hope it helps. Cheers
  38. 1 point
    @igor that is beautiful, thank you so much! I followed your steps to ensure I can do it myself and it's a great workflow. The scatter is now much much better
  39. 1 point
    Judging by the average person around me, that is definitely not the case. If it were, people wouldn't buy 4K TVs left and right even though they sit so far away that they physically cannot see the difference. And Sony and Microsoft wouldn't have used 4K as their buzzword for their consoles for years now. I think it's much more likely that they simply do not want to pay the extra price for a resolution that renders roughly 4x as long, when 1080p is absolutely "enough" in most situations.
  40. 1 point
    Looks like a problem with the alpha channel of your picture. Try making a black-and-white image out of your alpha information and using that in your alpha channel.
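    If you have Python with Pillow at hand, extracting the alpha into a standalone black-and-white map is only a couple of lines - a sketch with placeholder filenames:

        from PIL import Image

        # Pull the alpha channel out of an RGBA image and save it as a
        # grayscale map, usable in a material's alpha channel.
        img = Image.open("texture.png").convert("RGBA")
        img.getchannel("A").save("texture_alpha.png")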
  41. 1 point
    Sounds like you should just be doing multi-channel projection painting instead of stretching images over UVs in Photoshop. There is no way that I know of to handle the workflow you describe in Photoshop itself; you could do some manual method with duplicate layers and copy-pasting into the same region, but that would be very tedious. If you want to project Color, Normals, Bump etc. all down at the same time, you could give my tools a go (they are free). You just set up your material with the images you want to project in each channel, then drag it into the Material slot of the Paint brush. You can then do a lasso select over the entire surface and project that down onto the object. Here is a video that actually uses a Substance material, but the workflow is the same if you don't have Substance and just use a standard material. Edit: Actually, it looks like you are using version 7 of C4D, so these tools won't help you.
  42. 1 point
    There should be no circumstance where one face has a normal map and others don't. If an object needs a normal map, the whole object should get a normal map, and the same applies for every channel an object needs. The normal pipeline for this is simpler than you are making it sound. All material channels are discrete, so there is no circumstance where (for example) a Normal channel texture shares UV space with other channels. Once you have the UV mesh layer in a document, all your maps should line up with that, and if they do, all channels will still be in alignment when wrapped onto objects. Most people start with the diffuse map, based on the UV mesh layer; all other layers are then usually created from copies of that first texture, by copying the layer and adjusting it for the purpose at hand. But you shouldn't ever need to change anything positionally having done that, so things stay in alignment over the whole model at every stage. Not sure if that answers your question or not, but we might as well have the correct basics written down somewhere so we can at least say we covered them... CBR
  43. 1 point
    Version 2 is more planetary, i.e. the planets move -
  44. 1 point
    It means that this format is not compatible with Cinema's native shaders so they need to be baked first. Baking is just the process of turning procedural textures (and sometimes lighting) into bitmap-based ones. CBR
  45. 1 point
    My number 1 wish: Redshift included in C4D as the default render engine, free of charge. Immediate availability, and compatible with Metal for Mac users.
  46. 1 point
    Edit 2017: Renamed topic from "Substance Painter Exported Texture Importer (SPETI)" to "SPETI and TINA". The original post was made for Substance Painter textures, but as it applies to any kind of textures, the plugin has been split into 2 "versions"; read more here: https://www.c4dcafe.com/ipb/forums/topic/92673-substance-painter-exported-texture-importer-speti-v06/?page=3#comment-650070 The original message below and on the next pages will still refer to Substance Painter. <end edit>

    Working with Substance Painter for texture creation, I found it a waste of time to manually create each material from the numerous exported textures. Looking at the main character for my short story, I had to import 18 texture sets (each 6 bitmap files). Knowing that many more objects had to be textured in the future, I soon started to look at how to automate this material creation. For this I looked into scripting, and soon decided to make a plugin, which I could have named "Substance Painter Artistically Generated Exported Texture Importer", but went for the simpler "Substance Painter Exported Texture Importer" instead - SPETI for short. The main purpose of this plugin is thus to create materials from scratch, using the generated textures from Substance Painter as input - or, in my case, to update the already prepared materials (having only the color channel activated). Since development has only just started, many more features have yet to be implemented, one of which is the configuration panel where users set up how the Substance Painter textures are to be mapped. Currently the configuration is very basic and matches my workflow. More detailed explanations soon to come. Enjoy! Latest version v1.1 runs on R17 - R21. TINA v11 (R17 - R21).zip
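    The heart of that kind of automation is quite small. As an illustration (a hedged sketch, not SPETI's actual code - the material name and file path are placeholders), creating one material and wiring a bitmap into its color channel from the Script Manager looks roughly like this:

        import c4d

        def make_material(doc, name, color_map_path):
            # Create a standard material with a bitmap shader in the color channel
            mat = c4d.BaseMaterial(c4d.Mmaterial)
            mat.SetName(name)
            shader = c4d.BaseShader(c4d.Xbitmap)
            shader[c4d.BITMAPSHADER_FILENAME] = color_map_path
            mat[c4d.MATERIAL_COLOR_SHADER] = shader
            mat.InsertShader(shader)  # the material must own its shader
            doc.InsertMaterial(mat)
            return mat

        if __name__ == '__main__':
            make_material(doc, "Body_BaseColor", "textures/body_basecolor.png")
            c4d.EventAdd()

    Repeating that per texture set and per channel (color, bump, normal, ...) is essentially what the plugin automates.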
  47. 1 point
    Adding that mean filter but also bumping the iterations up from 1 to 2 on that mean filter really helped a lot! Thanks!
  48. 1 point
    I have updated the plugin to provide an option (via the cog wheel) to ignore the outer edges of an open-ended mesh being detected as UV boundaries. In general you would want to turn all UV seams into edge selections. In the case of open-ended mesh objects, the mesh boundaries would be detected as UV boundaries... while these aren't actually UV seams. The new option will (by default) ignore these boundaries. On the other hand, you might want to turn all UV island boundaries into edges, no matter whether they are UV seams or not. You can do so by unchecking the new option; this is the behaviour of the original version 1.0.
  49. 1 point
    Hi. The latest scene, made in C4D and Corona Renderer. You can get it at www.vizforyou.com
  50. 1 point
    Hi. You can connect the Range Mapper to multiple morphs from one single controller. Here is a simple file to show this. You don't need multiple Range Mappers if they all share the same value, but in cases where the range is different you can use multiple ones. The main thing is that one User Data control can drive as many morphs as you like. To get things set up initially you can use the method @deck mentioned; this can set one up for you in XPresso. Then you can simply drag and drop each morph tag into that single XPresso setup and connect the Range Mapper to the Morph Strength input; repeat for more morphs. Here is a video showing how to do this. Dan Posemorph multiple.c4d