Monday, 13 December 2010
Saturday, 4 December 2010
Tuesday, 30 November 2010
Now to the testing!
Orla did a quick stop motion test. Because of technical difficulties, we had to shoot it in low res (788 x 576), while the live-action footage is shot in HD 1080.
Here's the stop motion test!
Here's the live-action front shot!
So... I tracked the stop motion test, but unfortunately I (again) messed up the setup. There weren't enough tracking points to track all of the movement without any jerky glitches. The first 2.5 seconds are pretty much useless. Well, I guess that's why we do the testing. The last 6 seconds were fine though, so at least I had something to play with.
I prepared (stabilized and masked) each eye in After Effects and exported them as 32bit EXRs.
Then I modeled the geometry according to the 3D locations of the locators exported from the 3D motion track. Because MatchMover animates the camera rather than the scene, the geometry can just sit still in 3D space while the camera moves around it, making it look like the geometry is moving.
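That camera-inverse trick is easy to sanity check with a bit of arithmetic. Here's a tiny pinhole-projection sketch (translation only, with made-up numbers, nothing from the actual solve) showing that moving the camera one way looks identical on screen to moving the geometry the other way:

```python
# Toy demo: a camera moving around static geometry projects to the same
# 2D image as static camera + geometry moving the opposite way.
# All numbers here are made up for illustration.

def project(point, cam_pos, focal=35.0):
    """Pinhole projection of a 3D point for a camera at cam_pos looking down -Z."""
    x = point[0] - cam_pos[0]
    y = point[1] - cam_pos[1]
    z = point[2] - cam_pos[2]
    return (focal * x / -z, focal * y / -z)

point = (1.0, 2.0, -10.0)      # a vertex sitting still in world space
cam_move = (0.5, -0.3, 1.0)    # camera translation on some later frame

# Option A: camera moves, geometry stays put
a = project(point, cam_move)

# Option B: camera stays put, geometry moves the opposite way
moved_point = (point[0] - cam_move[0], point[1] - cam_move[1], point[2] - cam_move[2])
b = project(moved_point, (0.0, 0.0, 0.0))

print(a == b)  # True: both give the same 2D position on screen
```

With rotation included the same holds; you just apply the inverse of the full camera transform to the geometry instead of a simple negated translation.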
This is the geometry for the projection (not perfect. Gotta spend more time on it once it comes to production).
Next I built the setup for the camera projection and painted transparency onto the mesh using Maya's 3D Paint tool! Here's a screenshot of the geometry with the live-action footage projected onto it!
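For reference, the same projection setup could be wired up by script. This is only a sketch under assumptions: node names like projectionCamShape are made up, and the real setup was built through the UI, not with code:

```python
# Hypothetical script version of the camera projection setup.
# Assumes a matchmoved camera shape called 'projectionCamShape' exists;
# all names here are made up for illustration.
import maya.cmds as cmds

# File texture holding the live-action plate
plate = cmds.shadingNode('file', asTexture=True)
cmds.setAttr(plate + '.fileTextureName', 'front_plate.exr', type='string')

# Projection utility node set to perspective and linked to the camera
proj = cmds.shadingNode('projection', asUtility=True)
cmds.setAttr(proj + '.projType', 8)  # 8 = perspective projection
cmds.connectAttr(plate + '.outColor', proj + '.image')
cmds.connectAttr('projectionCamShape.message', proj + '.linkedCamera')

# Feed the projected plate into the colour of the mesh's material
mat = cmds.shadingNode('lambert', asShader=True)
cmds.connectAttr(proj + '.outColor', mat + '.color')
```

The painted transparency map would then go into the material's transparency slot the usual way; the projection only drives the colour.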
That's all the work done in Maya. Now it's comping!
I rendered out each eye and one render of the mesh with a lambert material override to get some shading. The comping mainly involved colour grading, masking out the nose and adding some grain to the CG shots, since their resolution was much higher than the stop motion footage (although I think I got a bit carried away with that).
I also had to paint out some bits of the puppet's eyebrows that would stick out at the beginning when it turns around.
There are still some things to sort out, but the main idea stands. The test was mainly to go through the pipeline and to prove to Sarah that it could be done. She's not that familiar with 3D, so she didn't trust the idea at first. But I think I managed to convince her that this technique is feasible and will save her a lot of time over the masking technique from Madame Tutli-Putli.
Monday, 11 October 2010
Sunday, 10 October 2010
Friday, 3 September 2010
Thursday, 10 June 2010
Tuesday, 1 June 2010
I also had to paint in the space between the teeth. The clay model had only a little hole.
Replacing the eyes of the duck, for a wink.
The most time-consuming work was on a shot of the sea. I had to make a CG sea with a heart-shaped reflection of the sun and the silhouette of the mermaid, with its reflection, in the middle of the sea.
This little clip had to be reflected in the glasses of Helvis.
Adding glittering stars on the disco ball.
Monday, 24 May 2010
The reason they all look a bit different is that my home monitor is set up differently from the ones at uni. I put the renders into Photoshop and adjusted the colours without using the same settings.
Saturday, 8 May 2010
Friday, 7 May 2010
Wednesday, 5 May 2010
Friday, 30 April 2010
Thursday, 15 April 2010
Andy asked me to test the sleeves when the arms are bent, as I kind of forgot to test that the first time around. Turns out that this does cause some trouble with the nCloth.
I kept the substeps and max iterations really low so it would simulate faster, but that causes trouble when vertices interpenetrate. So I adjusted some of the settings, mainly the quality settings for the nCloth and the nucleus solver. That comes at the cost of longer simulation and caching times, but the result is hopefully worth it.
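In script form the tweak boils down to a couple of attributes on the nucleus node. A rough sketch, assuming default node names; the values are illustrative, not the ones actually settled on:

```python
# Hypothetical sketch of the quality settings tweak. Node names assume
# Maya defaults; the real values were found by trial and error.
import maya.cmds as cmds

# More substeps and collision iterations catch interpenetrating vertices,
# at the cost of longer simulation and caching times.
cmds.setAttr('nucleus1.subSteps', 6)                # up from the default of 3
cmds.setAttr('nucleus1.maxCollisionIterations', 8)  # up from the default of 4
```

Doubling these roughly doubles solver work per frame, so it's worth caching overnight rather than simulating interactively.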
Friday, 9 April 2010
Wednesday, 7 April 2010
Wednesday, 31 March 2010
Saturday, 27 March 2010
Friday, 26 March 2010
- create your light (a point or spot light; it supposedly works with volume lights as well, but I never tried it, not a big fan of volume lights!)
- create the fog light (under Light Effects)
- scale the cone (spot light) or the radius (point light) of the light to the desired length of the light rays. Don't worry about intensity and colour for now.
- create a Mental Ray light shader: scroll down in the light's Attribute Editor to the mental ray section, open it, go further down to Custom Shaders and click the checkerboard next to Light Shader. Choose either mib_light_spot (spot lights) or mib_light_point (point lights) to create the Mental Ray light shader.
- in the Attribute Editor, go to the sphereShape node (point light) or the coneShape node (spot light) and check the Volume Samples Override box in the Render Stats section. Give Volume Samples a higher number, but beware that it increases render time, so leave it low while playing with the settings and crank it up for the final render. 50 is good for a start.
- now play around with the settings on the Mental Ray light shader and the fog light. Don't worry about any settings on the original light (like intensity and colour), because the Mental Ray light shader completely overrides all of them!
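The same setup can be sketched in maya.cmds, assuming the mental ray plug-in is loaded. This is only a rough script version of the steps above (the fog light itself is easiest to create from the Light Effects section, so it's just noted in a comment):

```python
# Hypothetical script version of the mental ray fog setup above.
# Assumes the mental ray plug-in is loaded; names are illustrative.
import maya.cmds as cmds

# Spot light, with its transform scaled so the cone covers the desired
# length of the light rays (create the fog light via Light Effects in the AE)
light_shape = cmds.spotLight(coneAngle=40)
light = cmds.listRelatives(light_shape, parent=True)[0]
cmds.setAttr(light + '.scale', 6, 6, 6, type='double3')

# Mental Ray light shader plugged into the light's Custom Shaders slot
mib = cmds.createNode('mib_light_spot')
cmds.connectAttr(mib + '.message', light_shape + '.miLightShader')

# Volume sample override in Render Stats: keep it low while testing,
# around 50 for the final render
cmds.setAttr(light_shape + '.volumeSamplesOverride', 1)
cmds.setAttr(light_shape + '.volumeSamples', 50)
```

From there, intensity and colour live on the mib_light_spot node, not the original light, matching the note above about the override.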
- select the light
- change the menu set to rendering
- go to Lighting/Shading > Select Objects Illuminated by Light. That selects all the objects in the scene, since the light so far illuminates everything. It deselects the light, though.
- shift-select the light again
- go to Lighting/Shading > Break Light Links. Now the light doesn't illuminate any objects.
- select the light again and shift-select the objects you want it to illuminate.
- go to Lighting/Shading > Make Light Links. And that's it: the light will now only illuminate those objects.
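The whole menu dance can also be done with the lightlink command. A sketch with made-up light and object names:

```python
# Hypothetical script version of the light-linking steps above.
# 'spotLight1', 'heroMesh' and 'floorMesh' are made-up names.
import maya.cmds as cmds

light = 'spotLight1'

# Break the default link to everything the light currently illuminates
currently_lit = cmds.lightlink(query=True, light=light)
cmds.lightlink(b=True, light=light, object=currently_lit)

# Re-link only the objects this light should illuminate
cmds.lightlink(make=True, light=light, object=['heroMesh', 'floorMesh'])
```

The query flag saves the shift-selecting: it returns everything the light is linked to, so you can break all the links in one go.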
Sunday, 21 March 2010
Tuesday, 16 March 2010
Thursday, 11 March 2010
Sunday, 7 March 2010
Thursday, 25 February 2010
Tuesday, 23 February 2010
One of the most difficult shots we had planned involved 3D tracking. Since Maya 2010 comes with MatchMover, we decided to use it for 3D motion tracking of a green box, which will later be replaced with a CG gun. Oddne built a huge box out of wood and painted it green with car paint. To see his progress, check out his blog: http://oddne.blogspot.com/.
The box is quite heavy and, like I said, quite big. It's difficult to handle, and that's a good thing, since it will be replaced with a huge, bulky metal gun. When it comes to shooting the scene, we won't have to worry about acting like the gun is heavy.
We put massive crosses of black duct tape on all surfaces of the box as tracking markers and shot some test footage.
First of all, you can see that we didn't use a tripod, and a hand-held camera shot is really not good for this procedure. It's not impossible, but you would have to track the environment as well, which in this case is unnecessary work.

I realised pretty soon after putting the footage into MatchMover that the crosses are far too big. MatchMover only needs contrast between pixels. The bigger the markers, the more likely it is that the track jumps around within the massive amount of black; the software just won't see a pattern any more, only a big black surface. Then I realised that the tape is quite reflective, so whenever it turns to where the light hits it, it becomes white. That obviously isn't good for tracking software, which searches for colour differences between pixels; the tracking markers really shouldn't change colour.

The main problem, though, is that I hadn't used a 3D tracker before, and neither had my team mates, nor any of my class mates for that matter. Georg has used 3D tracking software before, but never MatchMover. So I went home, did all the tutorials I could find and read the manual. Unfortunately there isn't much about 3D motion tracking in the manual or online; 3D tracking software is mainly used to imitate a camera track. The manual had a section about tracking moving objects, so I followed that route. It says you have to mask out the moving objects (in this case me and the box), auto track the environment, then invert the mask to track only the moving object (in this case just the box) and put that information into a different group. After a couple of tries I managed to solve the 3D camera, but the points were far from accurate.

Frustrated and tired, I returned to uni the next day, only to find out that Georg had found an easy way to track objects: import a mesh into the scene and attach its vertices to tracking points. I tried it and it didn't work, but only because I couldn't track enough points to solve a camera.
All in all, I would say that this test footage is absolutely useless. But I guess that's what test footage is for. I made pretty much every mistake you could make when preparing a shot for 3D tracking, but at least I know better now.
So when the day of the shooting came, I measured the box, built an exact replica in Maya with a few more divisions spread evenly across each face, and measured the distances between the vertices so I could tape tiny squares of insulation tape (smaller and less reflective than duct tape!) in the right places on the box.
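Working out where the tape squares go is just dividing up the face dimensions. A little sketch with made-up box measurements (not the ones from the real box):

```python
# Where to place tape markers on one face of the box, given the face size
# and how many divisions the Maya replica has. Dimensions are made up
# for illustration, not the real measurements.

def marker_positions(face_width_cm, face_height_cm, divisions):
    """Vertex positions of an evenly divided face, measured from one
    corner: these are the spots where the tape squares go."""
    step_x = face_width_cm / divisions
    step_y = face_height_cm / divisions
    return [(round(i * step_x, 1), round(j * step_y, 1))
            for j in range(divisions + 1)
            for i in range(divisions + 1)]

# A 60 cm x 60 cm face split into 3 divisions gives a 4x4 grid of markers
positions = marker_positions(60.0, 60.0, 3)
print(len(positions))  # 16 markers on this face
print(positions[:4])   # first row: [(0.0, 0.0), (20.0, 0.0), (40.0, 0.0), (60.0, 0.0)]
```

Since the Maya replica and the physical box share the same measurements, every tracked marker then lands exactly on a vertex of the mesh.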
That is the shot we're going to use. I'm going to talk about the experience of the green screen shoot in another entry. The footage looked alright, but I still wasn't looking forward to tracking it. I hadn't exactly had a successful solve yet, and the amount of hours that go with it makes the whole tracking experience a rather unpleasant matter.
I locked myself in my room (not literally) for the night with a few energy drinks and a big pouch of tobacco. Five hours later, all the energy drinks gone and my tiny room stinking of fags, I had 45 hand-tracked points. While waiting for the solve to finish, I tried to think of ways to still make the shot if it didn't work (which I was expecting). Once it finished, I changed to the 3D view to check the mess I'd just made, and I saw this...