NWoolridge

Regular Member
  • Content Count

    8
  • Joined

  • Last visited

Community Reputation

9 Noble Beginner

Profile Information

  • First Name
    Nicholas
  • Last Name
    Woolridge
  • C4D Version
    R18.041 Studio
  • Location
    Toronto, Canada

  1. I’m not from MAXON, but contact MAXON USA and ask for their educational grant application form.
  2. Once ProRender is working for you, you will see the need for this. It allows you to define what kind of new node materials you are making. As more render engines adopt the node system (if that happens), the need for this will only increase, so that you don't mix incompatible nodes within a scene...
  3. Many people are disparaging MAXON's new CEO as some sort of "marketing guy from Adobe", but this is false, and unfair. His previous roles were all in engineering and engineering management, not in marketing, and he would in all likelihood not have been involved in Adobe's licensing changes over the last few years. It's fine to express all sorts of opinions about the changes in MAXON's licensing, but let's not resort to that kind of ad hominem. It's not productive.
  4. I’m just a beta tester, and not privy to the developers' plans, so take this with a grain of salt, but: it may be a while, unfortunately. This plugin relies on the work Apple has done in ARKit to allow for reliable mobile face tracking, and on the 3D sensing hardware of the iPhone X (it is rumored that the TrueDepth camera will spread to other iOS devices in the future). Google is responding to ARKit with ARCore on Android, but it is not as mature, and does not offer facial tracking. Google's earlier AR effort, Tango, relies on specialized hardware, which might have been able to do face tracking, but they seem to be focusing on ARCore now.
  5. Cross-post from CGTalk: Hi all, I was lucky enough to be a beta tester for a cool new plugin from Cineversity: CV-AR. CV-AR has two components: an iOS app that works with the TrueDepth camera of the iPhone X to do facial motion capture, and a C4D plugin that takes that data and makes it available within C4D. Donovan Keith has an intro tutorial up on Cineversity at: https://www.cineversity.com/vidplaylist/cv-ar The free iOS app is available at: https://itunes.apple.com/ca/app/cv-ar/id1378696551?mt=8 The iOS capture tool leverages Apple's ARKit to provide realtime capture of 51 facial movements, as well as eye orientation and head position. The plugin is a blast to work with. The initial textured mask you get lets you know how well the performance was captured, but that's just the beginning. You can then use the blendshape strength data streamed from the capture object to drive your own rigs. I think it has real utility for animators, both as a way to provide reference for hand-tuned animation, and as a way to speed up a more automated animation workflow. Here is a sample movie with the capture object mask on the left and my rig driven by that data on the right. At this point I hadn't quite nailed some of the morph target sculpting, and the head movement needs adjustment. Let me know if you have any questions about how this works... https://www.dropbox.com/s/aw5dev1iu...hapes2.mp4?dl=0 Props to Kent Barber for developing the plugin, and to Rick Barrett and the folks at Cineversity.
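The "drive your own rigs" step in post 5 boils down to remapping captured blendshape weights onto your rig's morph-target strengths. Here is a minimal plain-Python sketch of that idea; the blendshape names, remap table, and function names are hypothetical illustrations, not the actual CV-AR plugin API (inside C4D the data arrives on the capture object).

```python
# Hypothetical sketch: remap ARKit-style blendshape weights (0.0-1.0)
# onto a character rig's morph-target strengths. The names and the
# remap table are illustrative, not the CV-AR plugin's real API.

def clamp(x, lo=0.0, hi=1.0):
    """Keep a weight inside the valid morph-strength range."""
    return max(lo, min(hi, x))

def drive_rig(capture_weights, remap):
    """Map captured blendshape weights to rig morph strengths.

    capture_weights: dict of blendshape name -> captured weight (0..1)
    remap: dict of blendshape name -> (rig morph name, gain),
           letting you exaggerate or damp individual shapes.
    """
    morph_strengths = {}
    for shape, (morph, gain) in remap.items():
        weight = capture_weights.get(shape, 0.0)  # absent shapes rest at 0
        morph_strengths[morph] = clamp(weight * gain)
    return morph_strengths

# Example: exaggerate the jaw a little, damp the smile slightly.
remap = {
    "jawOpen": ("Jaw_Open", 1.2),
    "mouthSmileLeft": ("Smile_L", 0.9),
}
weights = {"jawOpen": 0.5, "mouthSmileLeft": 1.0}
print(drive_rig(weights, remap))  # {'Jaw_Open': 0.6, 'Smile_L': 0.9}
```

The per-shape gain is where the hand-tuning mentioned in the post happens: if a morph target's sculpt is too strong or too weak relative to the capture, you adjust its gain rather than resculpting.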
