Portal Effects in Blender

First, the demo. https://youtu.be/zLoxJ-GuwFk?si=xn0Pb8ptT-HmdFiV

There are two parts to this. The first is the geometry nodes setup that makes a moving object approach a portal, disintegrate as it passes through the source portal, and rematerialize at the target portal, heading in the right direction with the right orientation. The second is the geometry nodes and shader nodes setup that has each portal instance show what's visible through the other end of the portal. We'll start with the latter, because it drives how we implement the former.

I must make it clear that I did not invent the portal view technique. I followed a recipe video on YouTube, which itself didn't do things maximally efficiently or entirely correctly. I'll walk through the process and explain things as I go.

First, a little math. Blender represents an object's Position, Rotation, and Scale as a 4x4 transformation matrix of numbers. A matrix is a data type with certain operations defined on it, most notably Multiplication and Inversion. You can think of multiplying two matrices as "apply this other transform to this existing transform". Unlike multiplying two simple numbers together, matrix multiplication is not commutative: the order in which you multiply matrices matters. Matrix inversion is kind of like "flipping the fraction": 2/3 flips to 3/2. If you multiply a matrix by its inverse, you get the Identity Matrix (basically, the "null" or "do nothing" transformation). In terms of transformation matrices, inverting a matrix can be thought of as "undoing" the transformation. If you have a transformation matrix that positions at 3,7,9, rotates 30°,-20°,17°, and finally scales 2,4,8, the inverse scales by 1/2,1/4,1/8, then rotates -30°,20°,-17°, and finally positions at -3,-7,-9: the opposite operations, applied in the opposite order.
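
If you want to poke at these rules yourself, Blender's bundled mathutils module works with the same kind of 4x4 transforms. Here's a minimal sketch you can paste into Blender's Python console, using the numbers from the example above:

    from math import radians
    from mathutils import Euler, Matrix, Vector

    T = Matrix.Translation(Vector((3, 7, 9)))
    R = Euler((radians(30), radians(-20), radians(17))).to_matrix().to_4x4()
    S = Matrix.Diagonal(Vector((2, 4, 8, 1)))   # 4x4 scale matrix

    M = T @ R @ S            # position, then rotate, then scale

    print(T @ R == R @ T)    # False: order matters
    print(M @ M.inverted())  # ~identity matrix, up to float rounding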

Blender's Object Info node in Geometry Nodes provides the object's position, rotation, and scale both as individual components and as a combined transformation matrix (at least in version 4.4.3). This means we'll have an easy time manipulating the transformations by just multiplying matrices together. Unfortunately, the shader nodes do not yet support matrix operations, so we are going to have to hack things a little.

This technique leans on instancing: we're going to store attributes on two instances of the portal window. That gives the attributes a consistent naming scheme, and the shader's Attribute node (set to the Instancer domain) will pick up the right values for each individual instance.

PART ONE: Setting up the portal window instances

Add a new Empty to your scene and for the time being, place it off to the side. Name it "Portal Out". Create a new plane and leave all the size and rotation parameters at their defaults. This will act as the portal window. Name it "Portal In".  Now the plan is to place the plane where you want the "ingoing" portal, and the Empty where you want the "outgoing" portal. Apply a new Geometry Nodes modifier to the plane and switch to the Geometry Nodes editor.
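
If you prefer scripting the scene setup, a rough bpy equivalent might look like this (the locations are placeholders; put the objects wherever you want your portals):

    import bpy

    # The "outgoing" portal marker
    bpy.ops.object.empty_add(type='PLAIN_AXES', location=(5, 0, 1))
    bpy.context.active_object.name = "Portal Out"

    # The "ingoing" portal window, a default plane
    bpy.ops.mesh.primitive_plane_add(location=(0, 0, 1))
    portal_in = bpy.context.active_object
    portal_in.name = "Portal In"

    # An empty Geometry Nodes modifier; the node tree gets built in the editor
    portal_in.modifiers.new(name="GeometryNodes", type='NODES')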

We need to start with Object Info for both the plane and the Empty. Add two Object Info nodes; on one of them, choose the Portal Out object, and connect a Self Object node to the Object input of the other:

We're going to make two instances of the portal window, so connect your Group Input to a Geometry to Instances node, and then a Join Geometry node to the Group Output node:

One instance will go straight through and the other will be transformed somehow to be properly instanced on the Empty. We'll start like this (note that I've set the Transform Geometry node to work with a Transformation Matrix, instead of the individual components):

Finally, we're going to need to capture Position, Rotation, and Scale components for each instance because the Material for the portal will need them, and unfortunately at this time the Material nodes have no matrix operators. So, make the following node setup and convert it into a Group, because we'll need it twice (note that we're capturing these attributes in the Instance domain - each instance will capture its own unique values):

The group inputs will be Geometry and a Transformation matrix. Someday, the Material editor will support matrices, and this silliness won't be needed - we'll just capture the matrices directly. Let's capture that info:
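
Conceptually, this capture group just decomposes one matrix into its three components. In mathutils terms, that's a one-liner (a sketch, reusing the example transform from earlier):

    from math import radians
    from mathutils import Euler, Matrix, Vector

    M = (Matrix.Translation(Vector((3, 7, 9)))
         @ Euler((radians(30), radians(-20), radians(17))).to_matrix().to_4x4()
         @ Matrix.Diagonal(Vector((2, 4, 8, 1))))

    # Split one transform matrix into the pieces the shader can consume
    loc, rot, scale = M.decompose()    # Vector, Quaternion, Vector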

OK, so now what to capture? The Ray Portal BSDF shader node requires Position and Direction input vectors. The transform information from the Portal In object needs to be provided to the Portal Out instance, and vice versa. But each object has its own transform information, so what we provide has to be the relative transform between the two. If we take the transform from the Self object and invert it, that effectively "undoes" the transform that brought the object to its current position, rotation, and scale. If we then multiply the Portal Out transform by this inverted Portal In transform, we get the transformation from Portal In to Portal Out. In other words, we've figured out how to translate, rotate, and scale from the Portal In object to land on the Portal Out object. The shader will need this final result.
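
In script form, that relative transform is a single multiplication. A sketch against the two objects from Part One, runnable in Blender's Python console:

    import bpy

    m_in = bpy.data.objects["Portal In"].matrix_world
    m_out = bpy.data.objects["Portal Out"].matrix_world

    # Undo Portal In's transform, then apply Portal Out's
    in_to_out = m_out @ m_in.inverted()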

And of course, we need that same transformation information that goes the other way for the other instance:

And finally, we want to apply the transform of where the Portal Out empty is to the inverted transform of where the Portal In object is, and apply the result to the second instance. Note the order of the multiplication here - we're taking the inverted transform of 'self' and applying the transform of 'out' to it:

We apply it this way because otherwise the instance would appear relative to the Portal In object, and we want it to stand independently at the Portal Out empty. By taking the transform of the Portal In object and inverting it, we're effectively computing how to move the plane back to the "ground state" of no translation, rotation, or scale. By then applying the Portal Out transform, we're figuring out exactly where to place (and rotate and scale) the Portal Out instance of the plane.
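
You can sanity-check the multiplication order in the Python console: applying this matrix to anything carrying Portal In's transform lands it exactly on Portal Out's transform. A quick sketch:

    import bpy

    m_in = bpy.data.objects["Portal In"].matrix_world
    m_out = bpy.data.objects["Portal Out"].matrix_world

    placed = (m_out @ m_in.inverted()) @ m_in
    print(placed)    # ~m_out, up to float rounding; the reversed order would not be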

You should now see a second plane located where the Portal Out empty is. If you move around the Portal In object, or change its scale or rotation or anything, the Portal Out instance won't do anything. Conversely, if you apply transformations to the Portal Out empty object, you'll see its instance of the plane react accordingly, and naturally the Portal In instance won't be affected.

PART TWO: Setting up the portal window material.

We're going to be using the Ray Portal BSDF shader:

And we're going to need to use information from the portal window instance geometry:

First, we're going to figure out the right direction each instance needs to "look." The Incoming vector points from the shading point on the surface back toward the viewer (the origin of the incoming ray). We need to take that, flip it to point the other way, and then rotate it based on how the plane is rotated. We captured Rotation information in the Geometry nodes, so all we need to do is scale the Incoming vector by -1 and then rotate it by our captured rotation:
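
Here's the same direction math in mathutils terms - a sketch, where the input values are stand-ins for the shader's Incoming vector and the captured rotation attribute:

    from mathutils import Euler, Vector

    incoming = Vector((0.0, 0.0, 1.0))       # stand-in for the shader's Incoming
    captured_rot = Euler((0.0, 0.0, 1.57))   # stand-in for the captured Rotation

    # Flip the view vector, then rotate it the way the plane is rotated
    direction = captured_rot.to_matrix() @ (incoming * -1.0)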

Note the Attribute node is reading from the Instancer domain, so we capture the proper value for each instance. The next thing is to reapply the captured transform (which wouldn't be as messy if we had Matrix operators here):

But we still haven't settled on what to transform. Naïvely, we'd just use the Position vector from the Geometry node:

But that produces squirrelly results, because the ray tracing tends to "see" the portal plane itself. So we want to bump the position ever so slightly off the plane by adding a tiny amount of the plane's normal:

Why scale the Normal vector by a negative amount? Because the view direction is opposite to the plane's normal. If we used a positive amount, the view would be looking backward through the portal plane and would see nothing.
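
Putting the last few steps together, the Position input works out to "nudge, then scale, rotate, and translate". A mathutils sketch with stand-in values:

    from mathutils import Euler, Vector

    p = Vector((0.2, 0.3, 0.0))      # stand-in for the shader's Position
    n = Vector((0.0, 0.0, 1.0))      # stand-in for the surface Normal
    loc = Vector((5.0, 0.0, 1.0))    # captured attributes (stand-ins)
    rot = Euler((0.0, 0.0, 1.57))
    scale = Vector((1.0, 1.0, 1.0))

    nudged = p + n * -0.001          # step slightly behind the plane
    portal_pos = loc + rot.to_matrix() @ (nudged * scale)   # scale, rotate, translate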

You should now have a fully functional portal view. Aim one plane in some direction and see what it's looking at through the other. Scale or rotate one of them. See the magic! Actually, it would help if you had something to look at - load a background HDRI or scatter your scene with miscellaneous objects.

PART THREE: Making objects teleport through the portal.

 

 

How I made the Slit Scan video in Blender

While I did this in Blender, nothing about the technique is specific to Blender - it could be done in any other package. The only requirement is that the renderer can do motion blur by some means other than faking it with motion vectors and blurring the rendered image (much like you'd get using Photoshop on a single still).

My rendering: https://youtu.be/MVCQr0uBCeQ

This is done using a pretty old technique called Slit Scan Photography. I hunted around for a description of the process and found a lot of illegible diagrams and vague descriptions. I finally found this video https://vimeo.com/391366006 which is just awesome. Now I know how the F/X folks made the star fields in Star Trek in 1966! The video does an excellent job of explaining the process and should, by itself, give you enough to recreate the effect in your 3D package. Here's a bit of a breakdown of the process:

The camera is moving toward (or away from) your slit plane, exposing a single frame over the course of the entire movement. To avoid getting just a smeared blur of the source image, you need to move the source while the camera is moving, and you need to account for exactly how the camera moves so you "paint" the image into the blurred space the right way.

For my setup, here's how the texture is positioned while the camera is close:

and a little further away:

and all the way out:

If you take notice of the yellow/white splotch with the red dot in it, you'll see it's sliding off to the right in these images. That happens to be up from the camera's perspective. The slide distance is actually fairly significant - about 20 times the distance the texture will advance for the next frame:

As you can see, it's nearly all the way back to where it was for the close side of the exposure in the previous frame. This kind of animation produces a result where the smear rendered in each frame appears to be coming at the viewer. Which way the texture moves (up or down) doesn't really matter. As long as each frame's start point is advanced a little further in the same direction the texture slides during the exposure, the pattern will appear to fly out toward the viewer from frame to frame. If each frame's start point instead lags slightly behind the previous frame's (relative to the slide direction during the exposure), the motion will appear to recede away from the viewer. The distance the source slides during the exposure controls how stretched out the texture appears in the frame. The distance the source jumps from frame to frame controls how fast the texture appears to fly at the viewer.
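
Here's a toy sketch of that bookkeeping. The distances are made up; only the 20:1 ratio comes from my setup:

    # Offset of the texture at a given point within a given frame's exposure
    ADVANCE_PER_FRAME = 0.05                      # start-point jump between frames
    SLIDE_PER_EXPOSURE = 20 * ADVANCE_PER_FRAME   # travel during one exposure

    def texture_offset(frame, shutter):
        """shutter runs 0..1 across the exposure; negate ADVANCE_PER_FRAME
        to make the pattern recede instead of approach."""
        return frame * ADVANCE_PER_FRAME + shutter * SLIDE_PER_EXPOSURE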

Blender has a few implementation details (bugs? maybe) which can make things a bit complicated. For starters, Cycles does not render motion blur for anything other than moving geometry. Lights do not blur, nor do animated textures. I first tried animating the texture's position on the object, and all that produced was big smears during the exposure: the same texture slice was rendered along the entire exposure, instead of slightly different parts of the texture along the way.
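
The workaround is to move the geometry itself. One way to express that (a sketch, not my exact setup) is a sawtooth animation on the image plane's location, with a subframe keyframe at the end of each exposure so Cycles sees real motion to blur. "ImagePlane" and the distances are placeholders, and this assumes a shutter that spans most of the frame:

    import bpy

    plane = bpy.data.objects["ImagePlane"]   # hypothetical object name
    ADVANCE = 0.05                           # start-point jump per frame
    SLIDE = 20 * ADVANCE                     # travel during one exposure

    for f in range(1, 251):
        plane.location.y = f * ADVANCE            # start of this frame's exposure
        plane.keyframe_insert(data_path="location", frame=f)
        plane.location.y = f * ADVANCE + SLIDE    # end of the exposure (subframe key)
        plane.keyframe_insert(data_path="location", frame=f + 0.9)
        # you'll likely want these keys set to linear interpolation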

I wanted to avoid having the same texture render out the top slit as the bottom slit, which is what happens in the Dr Who example given in the video if you look closely (the sliding texture appears first on the bottom half, and then a moment later the very same pattern emerges on the top half). This requires two different texture planes, but that creates the problem of masking them so that one is exposed only in the top slit and the other only in the bottom slit. I originally just split the UV space of a single plane and animated the UVs so that a) they were moving nicely and b) they covered entirely different parts of the fractal so they'd look different. However, see the motion blur bug I mentioned.

So, I switched to two planes, with boolean modifiers to hide the part of each plane that would be exposed to the other slit. That revealed the next limitation of Blender - the booleans are all computed before the motion, or maybe sometime during the motion. At random, some frames partially exposed the wrong plane in each slit, producing chaotic blinking from frame to frame where sometimes the wrong plane would show in the top slit, and sometimes it wouldn't. You will need to have the slits far enough apart to avoid this problem, or just live with the result the BBC got when they made the Dr Who intro.

The slit plane needs to be as close as possible to the image plane(s). I have the image planes 0.0001m behind the slit plane. I made the slits using a boolean operator on a simple plane and the slit shapes themselves are simple planes with a Solidify modifier set to 1mm thick. This gives me the flexibility to alter the slits to other shapes as I like.
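
Scripted, that slit construction is just two modifiers (the object names here are hypothetical):

    import bpy

    cutter = bpy.data.objects["SlitShape"]           # a simple plane
    solidify = cutter.modifiers.new(name="Solidify", type='SOLIDIFY')
    solidify.thickness = 0.001                       # 1 mm thick

    slit_plane = bpy.data.objects["SlitPlane"]
    boolean = slit_plane.modifiers.new(name="Boolean", type='BOOLEAN')
    boolean.operation = 'DIFFERENCE'
    boolean.object = cutter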

I made the image planes emissive and quite bright.

I will leave it as an exercise for the reader to figure out how to do the slow fade-in and fade-out.