
Leaderboard


Popular Content

Showing content with the highest reputation since 07/28/2020 in all areas

  1. 11 points
    If I may, I love Cinema 4D and my intent with my Avatar is not to profess fanatical devotion to Blender. Now, Blender is a good program. Its users do have a lot to be happy about. I did try it when the interface improved with 2.8. But despite those improvements, it is still a bit clunky. Everything is there and for the most part stable, but (IMHO) the UI unnecessarily gets in the way of fully enjoying the program. In short, C4D is a lot more fun to use! So why do I have that Avatar? To serve as a reminder to MAXON's leaders, should they troll the site, that the hobbyist community does have a pretty valid option other than C4D for our CGI fix. Yes, we could follow the subscription plan, but we are hobbyists....we use C4D for the love of it and not as part of a business model. As for me, should my personal financial circumstances change such that I can no longer afford to stay current with the program, I still want to have something that works and not watch years of work go away when my subscription turns off. You can ONLY keep C4D on with a permanent license, and those costs have increased significantly (from $620/year in 2017 for the Studio MSA to ~$950/license upgrade). So my avatar is really part of that old argument of subscription vs. permanent licenses. I am a hobbyist that wants permanent licenses as long as they remain affordable for a hobbyist. Blender as an open source program will always be affordable. Blender gives us options....a reminder for the MAXON employees and CEO who visit the site and not intended to disrespect its members. Dave
  2. 3 points
    Thanks for mentioning the RingLoop plugin @Cerbera @thanulee. Only wanted to point out that I fixed a bug in the "Ring" functionality, and redesigned the plugin to provide separate "Ring" and "Loop" commands (on request of a user wanting to be able to assign separate shortcuts). Additionally, I have also made the "Skip" amount scriptable. Example scripts are included in the newest version 0.4. Again, you can assign shortcuts to any of these scripts, allowing for a speed up in workflow if you frequently require a particular skip amount.
  3. 3 points
    Ringloop - The lovely free plugin on the very front page of this site kindly shared by @C4DS?! CBR
  4. 3 points
    That screenshot in the post is definitely not R7. The interface looks way too fresh imho. Okay, so others in the thread have already explained how texturing for game engines works. It's an entirely different beast to something that you would do in C4D or other 3D software. Unless you're using Triplanar Mapping, probably everything is going to need a UV map. Even a flat plane. A game engine simply does not have concepts like cubic or spherical mapping unless you specifically write a shader for it. I personally have not tried or used kbar's 4D Paint yet, but that is mostly because I use Substance Painter / Substance Designer for my texturing workflow. These two tools are pretty much industry standard and they do everything you could ever need for game engine texturing, and Painter especially is pretty easy to get into. Photoshop as a texturing tool is simply outdated by now and has been for years. People definitely work on diffuse and other textures in it, but they certainly do not use it as a texturing tool. It's too tedious and lacks even basic features for the quality of texturing that is expected these days.
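    For anyone curious what triplanar mapping actually does under the hood, here is a minimal standalone Python sketch of the blending a triplanar shader performs: the three axis-aligned projections are weighted by the surface normal and mixed, which is why no UV map is needed. The sample_* functions are hypothetical stand-ins for texture lookups; a real engine shader does this per pixel on the GPU.

        def triplanar_weights(nx, ny, nz, sharpness=4.0):
            # blend weights from the absolute normal components; a higher
            # sharpness tightens the blend around the dominant axis
            wx, wy, wz = abs(nx) ** sharpness, abs(ny) ** sharpness, abs(nz) ** sharpness
            total = wx + wy + wz
            return wx / total, wy / total, wz / total

        def triplanar_sample(sample_yz, sample_xz, sample_xy, point, normal):
            # project the point onto the three axis-aligned planes, sample each
            # projection, and blend the results by the normal-based weights
            wx, wy, wz = triplanar_weights(*normal)
            x, y, z = point
            cx = sample_yz(y, z)   # projection along the X axis
            cy = sample_xz(x, z)   # projection along the Y axis
            cz = sample_xy(x, y)   # projection along the Z axis
            return tuple(wx * a + wy * b + wz * c for a, b, c in zip(cx, cy, cz))

        # example with flat-colour "textures" standing in for real lookups
        red   = lambda u, v: (1.0, 0.0, 0.0)
        green = lambda u, v: (0.0, 1.0, 0.0)
        blue  = lambda u, v: (0.0, 0.0, 1.0)
        print(triplanar_sample(red, green, blue, (0.2, 0.5, 0.1), (0.0, 0.9, 0.44)))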
  5. 3 points
    @3D-Pangel That is straight from the Redshift render engine, no grading applied, not even bloom.

    Explosia FX vs TFD: These past few weeks I've gotten to learn Explosia FX as well as Turbulence FD. Both have their advantages and disadvantages when dealing with fire & smoke, but in essence they're the same. Explosia works really well with X-Particles because of the use of modifiers. Turbulence FD works with X-Particles as well, but there's a bit of a learning curve. TFD is very technical, whereas with Explosia FX you can hop on it and not worry about the right values, it's just plug and play. TFD has GPU rendering, which is huge; if X-Particles had that, TFD would be less favored in my opinion.

    VDB & Artifacts: When I went over to TFD to redo the render that I did in Explosia, I came across a problem that I never encountered before, and that is artifacts, mainly when dealing with density/smoke. The artifacts [stepping] look like steps, like a Minecraft cloud. To fix this, you have to make sure that your Density values in TFD don't go exceedingly high, or to be safe, don't let them go past the value of 1. Example: if you set your Density channel to 1 in the Emitter, the value won't go past 1. However, when working with fuel in TFD, it adds density every second. You can refer to the Jawset Forum here where we break this down and explain how to tailor your channels: https://forum.jawset.com/t/getting-stepping-boxy-renders/1071/14 I'm guessing the equivalent settings in Explosia FX are the Explosia tag, as well as the Physical Data in the XP particles. Regardless, there's always some sort of stepping, even in the first animation I did; however, you won't notice it unless you're super close up and/or looking for it obsessively.

    Voxel Size: A smaller voxel size means greater detail in your simulation; it also means longer render times. The new Upres feature in X-Particles is buggy. When I check it on, it does in fact change my simulations. My advice is that if you're upresing via Explosia FX, only have the Voxel Size changed, and bring all of the other values in the Upres options to 0. That way your simulation won't drastically change. I learned that the hard way when I checked on upresing in Explosia FX and my render was completely different.

    Shading Emission & Density in Redshift: If you're rendering the Density channel by itself, which then ties into the Scatter & Absorption channels, then it's literally plug and play. If you're using the Emission channel, however, there's a secret sauce to it. When shading the fire/emission, your gradient, as well as the Advanced tab, is what you mess with to balance the smoke and fire look. With the Emission gradient, I have two black knots on the left side. I take the 2nd knot that's to the right and push it just a little bit towards the center, just by a hair. With that, the flames look sharp and defined. Also, if you want more smoke and less fire, taking that 2nd knot and sliding it further towards the center gets rid of flames. In the Advanced tab, playing around with the New Max values under Emission Remap Range can lessen the intensity of the flames to give the density a more defined look, and messing with the New Max values in the Density Remap Range also brings out the detail in the smoke, BUT if you increase the value too much, you'll get artifacts/stepping. Once you know what each value and gradient does... then you'll have better control of the look of your fluids, and from there it's an amusement park.

    I've attached stills of my settings for an explosion I rendered using Turbulence FD. Instead of rendering straight out of TFD to Redshift, I converted the bcf files to VDB, because the VDB workflow is faster [trust me, I did a lot of tests]. My theory is that TFD uses one GPU through Redshift, even though Redshift can use more than one. It takes a long time to get what's rendered on file with a bcf file. However, VDB files are much easier to read it seems, even though the size is much bigger than a bcf file.

    Conclusion: Play with the simulation values and see what you get. Learn the terminology of these simulations, like fuel and density, so you'll know how they work together. To avoid artifacts, make sure your Density values aren't so high that you'll see stepping; lowering the voxel size doesn't fix the issue [refer to the link to the Jawset Forum]. Shading your fire and smoke is the real key to "the look" of your simulations. Know what each parameter is and what it does. Here's a link to Redshift's Volume Rendering docs that explain what the options do: https://docs.redshift3d.com/display/RSDOCS/Volume+Rendering?product=cinema4d When it's ready, I'll post the TFD explosion I made, which is from the stills that you see below.
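    To make the remap-range idea above concrete, here is a small Python sketch of the fit-range math such controls are based on (an illustration of the principle only, not Redshift's or TFD's exact implementation). A simulation whose density climbs far past 1 gets squeezed back into a 0..1 range before shading, which is what keeps over-dense voxels from turning into blocky stepping:

        def remap(value, old_min, old_max, new_min, new_max, clamp=True):
            # linear fit-range: map [old_min, old_max] onto [new_min, new_max]
            t = (value - old_min) / (old_max - old_min)
            if clamp:
                t = max(0.0, min(1.0, t))   # keep runaway densities from blowing up
            return new_min + t * (new_max - new_min)

        # densities accumulated by a hot fuel channel can reach absurd values;
        # remapping them back into 0..1 before shading tames the result
        for density in (0.2, 0.8, 1.0, 3.0, 20.0):
            print(density, "->", round(remap(density, 0.0, 3.0, 0.0, 1.0), 3))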
  6. 2 points
    I started on this a while back, collecting texture maps from NASA and wanting to add another educational project to my portfolio. I reloaded it with text instead of my lame narration. Sorry for those who had to hear that!
  7. 2 points
    I just wish MAXON would make an Indie version of C4D that's a bit more affordable for hobbyists. The affordable cut down editions were great for that (Mograph edition had enough features for me while being significantly cheaper than Studio), but they have been eliminated.
  8. 2 points

    Version 0.4

    12 downloads

    RingLoop is a small Python plugin which allows you to extend selected edges into a ring or loop. An optional "skip" value can be provided, which skips that number of edges when creating the ring or loop from the originally selected edge(s). Original thread: https://www.c4dcafe.com/ipb/forums/topic/102983-select-every-other-edge New version 0.4 (see changelog)
  9. 2 points
    hmm... I know of a Delete without Children command, but actually I haven't seen a Copy without Children. But since I haven't smuggled an ad for my Patreon into a post for a while, here's a script that duplicates the currently selected object and inserts the copy directly behind the original:

        import c4d
        from c4d import gui

        def main():
            if op is None:
                return
            theClone = op.GetClone(c4d.COPYFLAGS_NO_HIERARCHY)
            theClone.InsertAfter(op)
            c4d.EventAdd()

        if __name__ == '__main__':
            main()

    (Now waiting for someone to explain that there is such a copy already... ) ---------- Learn more about Python for C4D scripting: https://www.patreon.com/cairyn
  10. 2 points
    If your animation is based on transformation parameters like PSR, they can be wired up in XPresso relatively easily. The spline modifier in the Range Mapper node can be used to create an offset in the animation. example
  11. 2 points
    Seeing the answers so far, I may be on the wrong track. Please bear with me if I'm running in the wrong direction. As I understood the question, the goal is a polygon selection based on Fresnel. Now, for me in this case Fresnel is basically the angle between two vectors (actually I think it's rather the change of the angle of a ray of light when entering a material with different optical properties (still way simplified)): the vector defining the direction of the polygon (normal vector) and a camera ray.

    Now, this could be achieved with a bit of Python. But we shouldn't reinvent the wheel, but instead give some credit to Donovan Keith, who already used the Python tag to create the "CV-Parametric Selection Tag". The CV-Parametric Selection Tag provides us with means to select polygons based on different conditions, one of them being the direction a polygon is facing. This is actually already one part of the equation we are interested in: the polygon normal. Now, in order to answer the question correctly we'd need to take the camera ray which hits the polygon into account. But to make it a bit simpler (and also because it's probably not possible with this approach), I will only use the direction from the object origin to the camera position instead of the actual camera ray.

    Using these ingredients, I get the following. I doubt I'd be allowed to upload a scene, because Donovan's tag is actually Cineversity content. Instead, here's how I set it up: a very small and simple XPresso tag to feed the view direction from the camera into the CV-Selection tag. The data type for the subtraction has to be Vector; the "Vector" input of the CV tag is the one from the "Facing" parameter group (see below). And finally the CV-Parametric Selection Tag: as said before, I utilize the "Facing" condition. With the above setup it would select those polygons facing the camera with a certain tolerance. That's why the "Invert" checkbox is set. Playing with "Tolerance" you can change the amount of selected polygons. Depending on what you are after, you may also want to set "And Opposite" to also care for the backside.

    As said before, this is not a mathematically correct solution, but I think depending on the view parameters and an object's geometry, it may already be sufficient. And hopefully it serves you as a starter for better solutions. Cheers

    Additional notes: Due to C4D's handling of selection tags, Donovan's tag can be a bit finicky to use. Be a bit careful, save often and read the tag's docs carefully (are there any? I didn't check, but given the internal implications there should be...). Hopefully, what MAXON just demonstrated with Neutron will enable us in future to achieve things like this far more easily.
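    For anyone who would rather skip the Cineversity dependency, here is a rough Script Manager sketch of the same idea: compare each polygon's normal with the direction from the object's origin to the camera (the same simplification as above) and select the polygons facing it. The 0.5 tolerance is an arbitrary example value, and uniform object scale is assumed.

        import c4d

        def polygon_normal(points, poly):
            # geometric normal from the first three points of the polygon
            a, b, c = points[poly.a], points[poly.b], points[poly.c]
            return (b - a).Cross(c - a).GetNormalized()

        def main():
            obj = doc.GetActiveObject()
            cam = doc.GetActiveBaseDraw().GetSceneCamera(doc)
            if obj is None or cam is None or not obj.IsInstanceOf(c4d.Opolygon):
                return
            mg = obj.GetMg()
            # direction from object origin to camera (simplified "view vector")
            view_dir = (cam.GetMg().off - mg.off).GetNormalized()
            points = obj.GetAllPoints()
            sel = obj.GetPolygonS()
            sel.DeselectAll()
            tolerance = 0.5   # cosine of the cutoff angle; tweak to taste
            for i, poly in enumerate(obj.GetAllPolygons()):
                n_world = mg.MulV(polygon_normal(points, poly)).GetNormalized()
                if n_world.Dot(view_dir) > tolerance:
                    sel.Select(i)
            c4d.EventAdd()

        if __name__ == '__main__':
            main()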
  12. 2 points
    And if people keep doing that (sometimes more than once), and happily display both an unwillingness to put in the work themselves and an unwillingness to actually pay for professional help (not to mention to build a reputation and relationships of their own), the few who are willing to answer questions for free get less and less willing, until the forum is finally completely dead. People have no clue any more how forums (ideally) work.
  13. 2 points
    Interesting.....as Neutron Man (or Neutrino Man)...is Srek wanted for going too fast? Did he exceed the speed of light? Will Srek be arrested by Albert Einstein? Dave
  14. 2 points
    There should be no circumstance where one face has a normal map and others don't. If an object needs a normal map, the whole object should get a normal map, and the same applies for every channel an object needs. The normal pipeline for this is simpler than you are making it sound. All material channels are discrete, so there is no circumstance where (for example) a Normal channel texture is sharing UV space with other channels. Once you have the UV mesh layer in a document, all your maps should line up with that, and if they do, all channels will still be in alignment when wrapped onto objects. Most people start with the diffuse map, based on the UV mesh layer, and then all other layers are usually created from copies of that first texture by copying the layer and making adjustments to it for the purpose at hand. But you shouldn't ever need to change anything positionally having done that, so things stay in alignment over the whole model at every stage. Not sure if that answers your question or not, but we might as well have the correct basics written down somewhere so we can at least say we covered them... CBR
  15. 2 points
    Alright alright, I'll try... even if it goes against my nature as a German. I, DasFrodo, thoroughly acknowledge the existence and the possibility to observe these digital renditions of an F1 concept car. Please consider continuing to work on them, as I can derive a certain amount of enjoyment from them. Thank you.
  16. 2 points
    I've spent these last few weeks learning about the Physics of fire and how to create certain flames. As well as Shading the Volume once it's complete. Here's a still of the first Explosia Render that I did, with the Density and Flames shaded correctly.
  17. 1 point
    Daesu, a Korean historical headdress reserved for queens and crown princesses and worn at wedding ceremonies. Inspired by the @netflixph Kingdom series. Rendered in Cycles4D. Interestingly, I have grown fond of the renderer.
  18. 1 point
    You could use a couple of effectors parented to your camera, like the file and grab below, but just like using any effector this only hides the clones and therefore doesn't help with any speed increase. To do that you would be better off changing the floor shape (or a false / hidden floor shape) to a similar triangle to the camera view; obviously that would work for a still but probably not so much for a moving camera. Deck c4d278_cull_objects_outside_camera_view_0001.c4d
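    If you do want an actual speed gain rather than just hiding clones, a script-based check can toggle editor/render visibility for whole objects that sit outside the camera's view. A very rough Python sketch of that test, assuming a simple horizontal-FOV comparison is good enough and using a hypothetical object name ("Deco"):

        import c4d, math

        def in_view(cam, world_pos, margin=1.2):
            # transform the point into camera space; in C4D the camera looks down +Z
            local = ~cam.GetMg() * world_pos
            if local.z <= 0.0:
                return False                                 # behind the camera
            half_fov = cam[c4d.CAMERAOBJECT_FOV] * 0.5       # horizontal FOV in radians
            # compare horizontal and (approximated) vertical angles to the half FOV,
            # padded by a margin so objects don't pop right at the frame edge
            return (abs(math.atan2(local.x, local.z)) < half_fov * margin and
                    abs(math.atan2(local.y, local.z)) < half_fov * margin)

        def main():
            cam = doc.GetActiveBaseDraw().GetSceneCamera(doc)
            obj = doc.SearchObject("Deco")                   # hypothetical object to cull
            if cam is None or obj is None:
                return
            mode = c4d.MODE_ON if in_view(cam, obj.GetMg().off) else c4d.MODE_OFF
            obj.SetEditorMode(mode)
            obj.SetRenderMode(mode)
            c4d.EventAdd()

        if __name__ == '__main__':
            main()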
  19. 1 point
    Maybe the LOD object is an option.
  20. 1 point
    Nicely done! I felt like I was at a kiosk in a science museum. Very professional and some good information. I also like how you showed the orbital path of the moons with their inherent wobble. Plus the stars rendered very well....no flickering (there could be a whole tutorial on how to render star backgrounds properly...with and without motion blur. It is not as trivial a task as you would think)! So very well executed. The only thing I questioned was the size comparison of Mars's moons versus our Moon. Now there are pictures at the Nasa.gov site which do match pretty closely to what you showed. But Earth's Moon is 3475 km across versus Mars's moons at no more than 27 km (roughly a 130:1 difference in diameter). If you were to actually make this to scale, it would look like this: Yeah...some artistic license needed to be taken. Dave
  21. 1 point
    Cheers Bezo, I didn't have my layer mask on the bottom like that, I was trying to put a black or white under it so that could have been it, though I'm having trouble reproducing the problem this morning and off to work shortly. Never would have occurred to me to have a layer mask with nothing beneath it, but it makes sense now I see your example. Many thanks Deck
  22. 1 point
    There is a Shader Field and it works with the Fresnel shader as well. Updating is a bit limited though. You can set it to update each frame, but when navigating on a fixed frame it won't update live. You can preview the effect by running the scene though.
  23. 1 point
    Couldn't you just get the animation sorted out as keyframes, and then use a slider to advance / reverse the timeline? Not that I have ever done that, but I presume it must be possible... CBR
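    It is possible; one way is a short script (or Python node) that maps a 0..1 user data slider onto the document time. A minimal Script Manager sketch, assuming a null named "Slider_Control" whose first user data entry is the slider (both names are made up for the example):

        import c4d

        def main():
            ctrl = doc.SearchObject("Slider_Control")   # hypothetical controller null
            if ctrl is None:
                return
            frac = ctrl[c4d.ID_USERDATA, 1]             # 0..1 slider, first user data entry
            seconds = doc.GetMaxTime().Get() * frac     # map the slider onto the document length
            doc.SetTime(c4d.BaseTime(seconds))
            c4d.EventAdd(c4d.EVENT_ANIMATE)             # update the viewport at the new time

        if __name__ == '__main__':
            main()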
  24. 1 point
    Well this is just f**king marvellous !!! Have been missing that tool for so long now... since the 3DS max days in fact. Well done and thank you for bridging that long-standing gap in Cinema's selection tools...
  25. 1 point
    Hi. There is also another solution if you wish to try it. You can create an edge along the middle of your geometry. Select that edge and convert it into a spline, and then: menu > Character > Convert > Convert Spline to Joints (each point will get a joint). If the alignment of the joints is not good, run the Character / Commands / Joint Align Tool command. Hope it helps. Cheers
  26. 1 point
    No, I am not. I wish I were. Actually the entire idea of such a selection tag is from noseman. Back in those days, he had Donovan and me develop such a tag in parallel. When we discovered we were doing the same thing in parallel, I simply dumped mine. But since then, I have had quite a good idea of what can be done with this approach.
  27. 1 point
    LOL! Nice to know that I have added to the quality of Srek's work life. Please forgive me Srek! Please know that I at least had a good laugh...hopefully, you did too! Dave
  28. 1 point
    No, it's really not. I can see it for stills, absolutely. But for animation, unless you're in the VFX industry working on big budget movies like the Marvel films, it's just a waste of money. I forget where I saw that, but you need either a really big TV or to be pretty close to the screen to even have a noticeable difference between 4K and 1080p. For me the ideal resolution right now is 2560x1440. It's the perfect middle ground between FHD and 4K. It looks great, is noticeably sharper and not a waste of resources.
  29. 1 point
    Yep I think you do need to make the head separate from both the beer and the glass - I'd make all 3 with Lathes. The reason for this is that light travels differently through foam than it does through beer (and clear glass), so it's just far easier to handle that as separate geo. I take it you are aware of the excellent 3D Fluff tutorial on how to texture these things? CBR
  30. 1 point
    Judging by the average person around me, that is definitely not the case. If it was, people wouldn't buy 4K TVs left and right even though they sit so far away that they physically cannot see the difference. And Sony + Microsoft wouldn't have used 4K as their buzzword for their consoles for years now. I think it's much more likely that they simply do not want to pay the extra price for a resolution that renders roughly 4x as long when 1080p is absolutely "enough" in most situations.
  31. 1 point
    Looks like a problem with the alpha channel of your picture. Try to make a black and white image out of your alpha information and use this in your alpha channel.
  32. 1 point
    Archimedes made a 2 slot trammel that drew ellipses. https://en.wikipedia.org/wiki/Trammel_of_Archimedes I made one in C4D a while ago that had 3 slots. Recently I saw this device on YouTube, and thought I'd try it with 2 spinners. I imagined I'd need some trig to make it work, but it seems the small object just has to rotate 2X the speed of the large one. Not sure why... I got a bit carried away, adding planetary gears. I think epicyclic gears are used in electric screwdrivers, and Sturmey-Archer 3 speed bike hub gears. Epicyclic gear math can be a bit difficult, but you're welcome to look at my methods. trammel 9.c4d Here's a regular epi system that has sliders for speed: epi.c4d
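    The 2X ratio stops being mysterious once the tip position is written down. Assuming the small spinner is parented to the large one and turns the opposite way at twice the rate, its world-space angle is t - 2t = -t while the large arm is at t, so the tip lands at ((A+B)cos t, (A-B)sin t), which is exactly an ellipse. A quick standalone Python check (the arm lengths A and B are just example numbers):

        import math

        A = 200.0   # length of the large spinner arm (example value)
        B = 80.0    # length of the small spinner arm (example value)

        for deg in range(0, 360, 30):
            t = math.radians(deg)
            # large arm at angle t; small spinner parented to it and turning at
            # twice the rate the other way, so its world-space angle is t - 2t = -t
            x = A * math.cos(t) + B * math.cos(-t)   # = (A + B) * cos(t)
            y = A * math.sin(t) + B * math.sin(-t)   # = (A - B) * sin(t)
            # the tip therefore satisfies (x/(A+B))^2 + (y/(A-B))^2 = 1, an ellipse
            check = (x / (A + B)) ** 2 + (y / (A - B)) ** 2
            print(round(x, 1), round(y, 1), round(check, 6))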
  33. 1 point
    Sounds like you should just be doing multi-channel projection painting instead of stretching images over UVs in Photoshop. There is no way that I know of to handle the workflow you describe in Photoshop itself. But you could do some manual method with duplicate layers and copy pasting into the same region. But that would be very tedious. If you want to project Color, Normals, Bump etc... all down at the same time you could give my tools a go (they are free). You just setup your material with the images you want to project in each channel, then drag it into the Material slot of the Paint brush. You can then do a lasso select over the entire surface and project that down onto the object. Here is a video that actually uses a Substance Material, but the workflow is the same if you don't have substance and just use a standard material. Edit: Actually looks like you are using version 7 of C4D. So these tools won't help you.
  34. 1 point
    @3D-Pangel Yes, the stepping problem only presented itself in TFD; it just so happens that I was using Redshift. But as I said before, I went back to my first Explosia render, and there is some form of stepping, just very minor, and you only notice it if you're looking for it. It only matters when it's showing clear as day, like in this picture here. I even have a VDB cloud pack, highly detailed, but at some unnoticeable corners there's minor stepping. In recent tutorials, people used a density value of 1. So, when you follow along with those tutorials, those artifacts don't show up because the Density values are safe. Even current Redshift & TFD tuts always use a value of 1 in Density, so the problem never occurs. From my tests, having a value between 1 and 3 is OK. For tight close up shots though, a Density value of 1 is best. Before, I was following along with this 6 year old tutorial where this dude was adding a Density value of 20 in the Fuel parameters, meaning every second it was adding a density value of 20. By the end I had a Density value of 327, ridiculously high! When I rendered that in Redshift, my smoke looked so blocky that I didn't understand why [picture attached]. I created that topic in the Jawset Forum, and if you look you'll notice my frustration at first, haha! When it was explained to me and broken down, I did some new tests and then I started to understand how it worked. I'm still learning though, but I believe I'm on the right track. The reason why that guy put a density value of 20 in that 6 year old tutorial was because he was using the Standard renderer, and for the smoke to appear more dense, he increased the values to an absurd number; that's just my theory. With current renderers however, the settings are sensitive, thus normal values are appropriate. I'm glad this information helped you; let me know if you come across any RS problems, and I'll do my best to answer them.
  35. 1 point
    Version 2 is more planetary, i.e. the planets move -
  36. 1 point
    That was a master class in gaseous fluids - particularly the Redshift section, which I am just beginning to grasp (having recently purchased it)! Thank you! The link you provided on VDB artifacts is a wealth of information. Were the stepping problems you experienced noticed in TFD, or only when you tried to render with RS? As for me, I always use a density value of 1 in TFD...not sure why but probably because most tutorials use that setting. They rarely explain why though, and this is the first explanation I have read (especially the post from Jascha Wetzel). His caution about Fuel settings and the Fit Range settings is invaluable. Again, thank you! Dave
  37. 1 point
    Lovely job with sculpting, lighting and camera work, RS rendering, and sound design in this epic 20 minute music video by OVERWERK... https://www.MAXON.net/en-gb/news/case-studies/advertising-design/article/lost-in-thought/ Awesome level of detail - right down to the dust... CBR
  38. 1 point
    And this is where I don't like that you disable Post Voting for Admins. How am I supposed to like this without the button ?
  39. 1 point
    Looks like Tim from HelloLuxx. This works robot_0002.c4d
  40. 1 point
    A lot of people moved to instant messaging platforms like Slack, Discord or Telegram. There are more local chats and communities, and communities around educational platforms and personalities. Also, as @MighT said above, people want instant solutions to their problems, so they post a question to every IM chat, and if there is no answer after 15 minutes they go to Google or forums. Not the other way around. Also it's about the form of messaging. Forums are all about long, thoughtful posts, at least for me. Chats are for one-line, few-word phrases, instant responses and a constant flow of messages that creates an illusion that something is always happening. Compared to that, forums are pretty slow and still. For most people there is not enough activity and life on forums, but they will not create that activity themselves. As for the Neutron discussion, from what I've seen very few people understand what it is or can be in the future. And modern people are hard to surprise with anything. In the last few years I've never seen a real discussion of new software features that attracted a lot of people. Whining about subscriptions, costs or Blender doesn't count, because that's not about the features. It's almost like nobody really cares. It's very sad, because for me every new version of any software is interesting, some of them fun, some inspiring, giving a hint of a bright future.
  41. 1 point
    Just added a getting started guide to help everyone get up to speed painting with stamps, stencils and UDIMS.
  42. 1 point
    My number 1 wish: Redshift included in C4D as the default render engine, free of charge. Immediately available and compatible with Metal for Mac users.
  43. 1 point

    Version 1.0.0

    14 downloads

    I have been using "Set Selection" on many occasions. Be it to create selection tags to apply different materials to an object, or simply as a kind of clipboard to temporarily hold a set of selected polygons during modeling. However, in most cases a single selection tag is not enough for me. It can happen that during a modeling session I need a few temporary selections, to be picked up later in the process when I need to work here and there on a model. As such, in the past I had a love-hate relationship with the "Set Selection" command. It was a very useful tool, except that it required me to always deselect the newly created selection tag before I could create another one. The reason for this is that if you perform a "Set Selection" with a selected tag, the newly selected items will be merged into the selected tag ... instead of being created in their own separate tag. I mostly use the Commander to quickly type "set se" and press enter. Or I would add the "Set Selection" icon into a modeling palette and dock it in the layout. Still, in order to be able to create multiple selection tags, I would need to execute the command, deselect the tag, and proceed with creating a new selection. NOT ANYMORE ... It finally annoyed me so much that I spent some time writing a script to provide the functionality to perform a "Set New Selection" ... NOT overwriting the current selection tag. This script will create a new selection tag of its own, use an appropriate unique name (just as the native "Set Selection" does), and store the selected items, be they polygons, edges or points. I call it: Set New Selection. The good thing is that you can execute this script from the Commander, or drag the icon into a palette and dock it into the layout. AND it can coexist next to the native "Set Selection". Which means you can still use the original behaviour if you want to overwrite a selection tag, or use the new one to create separate tags each time the script is executed. Isn't that neat? Yes, I thought so too!
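    For anyone who wants the gist of it without the download, the core of a "set new selection" can be reduced to a few lines of Python: create a fresh selection tag, copy the live polygon selection into it, and append it after the object's existing tags so nothing gets overwritten. This is only a bare-bones illustration of the idea (polygons only, no unique naming, no undo), not the plugin itself:

        import c4d

        def main():
            obj = doc.GetActiveObject()
            if obj is None or not obj.IsInstanceOf(c4d.Opolygon):
                return
            tag = c4d.SelectionTag(c4d.Tpolygonselection)   # always a brand new tag
            tag.SetName("Polygon Selection")                # the plugin derives a unique name here
            obj.GetPolygonS().CopyTo(tag.GetBaseSelect())   # store the current polygon selection
            obj.InsertTag(tag, obj.GetLastTag())            # append after the existing tags
            c4d.EventAdd()

        if __name__ == '__main__':
            main()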
  44. 1 point
    Edit 2017: Renamed topic from "Substance Painter Exported Texture Importer (SPETI)" to "SPETI and TINA". The original post was made for Substance Painter textures, but as it applies to any kind of textures, the plugin has been split off into 2 "versions"; read more here: https://www.c4dcafe.com/ipb/forums/topic/92673-substance-painter-exported-texture-importer-speti-v06/?page=3#comment-650070 The original message below and on the next pages will still refer to Substance Painter. <end edit> Working with Substance Painter for texture creation, I found it a waste of time to manually create each material from the numerous exported textures. Looking at the main character for my short story, I had to import 18 texture sets (each with 6 bitmap files). Knowing that many more objects had to be textured in the future, I soon started to look at how to automate this material creation. For this I looked into scripting, and soon decided to make a plugin, which I could have named "Substance Painter Artistically Generated Exported Texture Importer". But I went for the simpler "Substance Painter Exported Texture Importer" instead, SPETI for short. The main purpose of this plugin is thus to create materials from scratch, using the generated textures from Substance Painter as input. Or, in my case, to update the already prepared materials (having only the color channel activated). Since development has only just started, many more features have to be implemented. One of which is the configuration panel where users set up how the Substance Painter textures are to be mapped. Currently, the configuration is very basic, and matches my workflow. More detailed explanations soon to come. Enjoy! Latest version v1.1 runs on R17 - R21 TINA v11 (R17 - R21).zip
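    The repetitive part such a plugin automates boils down to something like the snippet below: build a material, wrap an exported bitmap in a bitmap shader and drop it into the matching channel. This is just a minimal sketch of that idea for the color channel (the file path and material name are made up), not the plugin's actual code:

        import c4d

        def make_material(doc, name, color_map_path):
            mat = c4d.BaseMaterial(c4d.Mmaterial)
            mat.SetName(name)
            shader = c4d.BaseShader(c4d.Xbitmap)             # bitmap shader for the exported texture
            shader[c4d.BITMAPSHADER_FILENAME] = color_map_path
            mat.InsertShader(shader)                         # the material must own the shader
            mat[c4d.MATERIAL_COLOR_SHADER] = shader
            mat[c4d.MATERIAL_USE_COLOR] = True
            doc.InsertMaterial(mat)
            return mat

        def main():
            # hypothetical Substance Painter export
            make_material(doc, "Body_BaseColor", "textures/Body_BaseColor.png")
            c4d.EventAdd()

        if __name__ == '__main__':
            main()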
  45. 1 point
    Adding that mean filter but also bumping the iterations up from 1 to 2 on that mean filter really helped a lot! Thanks!
  46. 1 point
    Adding a 'Mean' filter cleans up a small amount.
  47. 1 point
    I have updated the plugin to provide an option (via the cog wheel) to stop the outer edges of an open-ended mesh being detected as UV boundaries. In general you would want to turn all UV seams into edge selections. In the case of open-ended mesh objects, the mesh boundaries would be detected as UV boundaries ... while these aren't actually UV seams. The new option will (by default) ignore these boundaries. On the other hand, you might want to turn all UV island boundaries into edges, no matter if they are UV seams or not. You can do so by unchecking the new option. This is the behaviour of the original version 1.0.
  48. 1 point
    Dear fellow C4D users! We are super happy to announce that our big update version 1.1 is available now for all, for our existing customers of course as a free update! The v1.1 update comes with a totally redesigned, simpler to use GUI / user interface, many new powerful features based on your feedback, and several fixes. An excerpt of the new feature list:
    - new streamlined GUI with coloured "tabs" that reflect the usual "1-2-3-4" setup workflow
    - new and improved drawing, new shape mirror and scale options
    - new axis tool, axis rotation tool to define texture directions, and axis view mode
    - new selection, box select and brush selection tools
    - new shape subdivision tools
    - new shape instances feature: assign colour, randomize colour and randomize texture ID can now be limited to selections
    - new shape instances feature, and new instance view mode
    - new optimize command to automatically remove all double/overlapping shapes in one click
    - new "manage library" button replacing the old load and save buttons
    - adjustable preview size of shape presets and of their window
    - new shader preset library feature
    - new texture direction modes menu, to define the direction of textures in relation to the new axis
    - new flip modes: flip U, flip V, flip U & V, with a % setting for how many of the textures are affected
    - new use seamless texture mode: allows random offset, even if the shape scale is at 1 or lower
    - new "generate seamless tiling" option, which uses a "pseudo randomness" on the border to generate seamless shader or baking results, even if the shader does not cover all of the object. Q-TILE-PRO makes sure it will always be automatically seamless, regardless of whether the mapping covers the whole surface in one tile or not
    - new round corner feature (separate from edge rounding, as many wished)
    - new layer transparency via texture (alpha mask per layer)
    We also started a new YouTube channel/playlist for Q-TILE-PRO, with several new videos for learning: https://www.youtube.com/playlist… Please subscribe to our new INFO CHANNEL :) We hope you like our new Q-TILE-PRO version as much as we do! Stefan Laub & The Qucumber.at | Q-TILE-PRO team! https://3dtools.info/q-tile-pro/
  49. 1 point
    Hi. You can connect the Range Mapper to multiple morphs from one single controller. Here is a simple file to show this. You don't need to have multiple Range Mappers if they all share the same value, but in cases where the range is different you can use multiple ones. The main thing is that one user data control can drive as many morphs as you like. To get things set up initially you can use the method @deck mentioned; this can set one up for you in XPresso, then you can simply drag and drop each morph tag into that single XPresso setup and connect the Range Mapper to the Morph Strength input, repeating for more morphs. Here is a video showing how to do this. Dan Posemorph multiple.c4d
  50. 1 point
    I've used ToonBoom Storyboard Pro to create the storyboard and animatic for two short films now. Each time, I've gotten to the point where I needed just a little more work on the animatic to make it really useful, but instead completely abandoned it in favor of blocking and rendering a pre-vis in C4D. I simply get to a point where I feel I waste my time trying to refine my mediocre drawings, while I can get a much more refined sense of motion and timing with pre-visualization (really rough blocking/animation/renders). Later, I'll reach another point where having a current storyboard is crucial for managing rendering - shot numbers with accurate run-time/frame count. So I end up replacing my drawn storyboards with stills from my pre-vis. Does anyone have a different storyboard/pre-vis workflow that is working for them?