Spring 2021 Reflection

For my Houdini tasks, I focused on the setup of the camera. My time was split between experimenting with the parameters of the camera node, learning how to make the image projected by the camera match the original chorography as closely as possible, and investigating how to move the 2-D and 3-D objects in the Houdini environment while maintaining the appearance (via scale) of the projected image.

Placing a traced map feature onto a 3D landscape requires two components: 1. constraining the movement of each object to a “between vector” that runs from the X-Y 2D location of the object (which corresponds to its original pixel location when traced in Supervisely) back to the camera, and 2. scaling each object so it increases in size proportionately as it moves farther away from the camera. For the former, I ultimately decided to use the pivot rotate functions in the transform nodes. The pivot of each object is rotated so that its z-axis aligns with the vector connecting it to the camera. If an end-user wants to push an object out onto the landscape, they only adjust the “Z Translate” parameter. To adjust the scale, Jake Heller came up with a formula: (Magnitude of ‘Between’ Vector + Z Translation) / (Magnitude of ‘Between’ Vector).
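The two components above can be sketched in a few lines of Python. This is a hypothetical illustration of the math only, not the actual Houdini parameter expressions; `push_onto_landscape` is an invented helper name.

```python
import math

def scale_factor(between, z_translate):
    """The scale formula: (|between| + Z translation) / |between|.
    Returns the multiplier that grows an object so its projected size
    stays constant as it is pushed away from the camera."""
    mag = math.sqrt(sum(c * c for c in between))
    return (mag + z_translate) / mag

def push_onto_landscape(obj_pos, cam_pos, z_translate):
    """Hypothetical helper: slide an object away from the camera along
    the between vector by z_translate units."""
    between = [o - c for o, c in zip(obj_pos, cam_pos)]  # camera -> object direction
    mag = math.sqrt(sum(c * c for c in between))
    unit = [c / mag for c in between]
    return [o + u * z_translate for o, u in zip(obj_pos, unit)]
```

For example, an object 5 units from the camera that is pushed another 5 units away doubles in scale, which keeps its on-screen footprint unchanged.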

Part of the difficulty in figuring out how to translate the 2-D / 3-D objects lay not just in determining the formulas, but in determining how to access individual objects. For the 2-D objects at least, every object has a separate vector connecting it to the camera and needs to be translated by a different amount. I had originally hoped to store the value of the respective camera vectors as an attribute on each primitive. Unfortunately, I could find no way to access attributes inside the transform nodes. As such, I had to represent the camera vector using local variables (like $GCX and $GCY) and float values for the camera’s position. Furthermore, for each 2-D object to be translated by a distinct amount, each needed its own ‘Z Translate’ parameter, referenced in its own transform node. In other words, there needed to be one transform node per 2-D object. All of the transform nodes for the 2-D objects are included in the ‘Vector Translate’ subnetwork. As of now, the subnetwork includes only 200 transform nodes, meaning at most 200 2-D objects can be translated in the environment.
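The pivot alignment that makes the per-object ‘Z Translate’ work can be sketched as follows. This is a sketch under an assumed rotate order (X rotation applied before Y rotation to the local +Z axis); Houdini’s transform node has its own rotate-order parameter, and `pivot_rotation` is an invented name, not a Houdini function.

```python
import math

def pivot_rotation(obj_pos, cam_pos):
    """Euler angles (degrees) that rotate a pivot's local +Z axis onto the
    unit vector pointing from the object toward the camera.
    Assumes the X rotation is applied first, then the Y rotation."""
    dx, dy, dz = (c - o for c, o in zip(cam_pos, obj_pos))
    mag = math.sqrt(dx * dx + dy * dy + dz * dz)
    dx, dy, dz = dx / mag, dy / mag, dz / mag
    rx = math.degrees(-math.asin(dy))      # tilt to match the vertical component
    ry = math.degrees(math.atan2(dx, dz))  # spin about Y to match x and z
    return rx, ry
```

With the pivot oriented this way, a single ‘Z Translate’ value moves the object straight along its between vector, which is why each object needs its own transform node rather than a shared one.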

Our final camera node was labeled ‘Z Formula’. The name refers to how the node uses the width and height of the digital map as parameters in determining the position of the camera. The formulas for the X and Y position of the camera were fairly intuitive: the center of the projected image (i.e., the center of the camera) should align with the center of the original map. Determining the formula for the Z position of the camera, however, was decidedly less intuitive. Eventually, I came up with the following rough formulas:

Camera X: Map Width / 2

Camera Y: Map Height / 2

Camera Z: Map Width * Focal Length / Aperture
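One way to motivate the Z formula, assuming the aperture represents the horizontal film-back width: tan(half-FOV) = (Aperture / 2) / Focal Length, and for the projected frame to span the full map width at distance Z, tan(half-FOV) = (Map Width / 2) / Z, which rearranges to Z = Map Width × Focal Length / Aperture. A minimal sketch of the three formulas, assuming map and camera units are consistent:

```python
def camera_position(map_width, map_height, focal_length, aperture):
    """Rough camera placement from the formulas above."""
    cam_x = map_width / 2.0   # horizontal center of the map
    cam_y = map_height / 2.0  # vertical center of the map
    # Back the camera off until the projected frame spans the full map width.
    cam_z = map_width * focal_length / aperture
    return cam_x, cam_y, cam_z
```

Note that when the focal length equals the aperture, `cam_z` reduces to the map width.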

Further experimentation with the camera parameters led me to deduce that our Houdini environment works best with a focal-length-to-aperture ratio of 1; in other words, the Z position of our camera is equivalent to the width of the original map, though this might not be the case for all environments. This camera formula will still require additional refinement, but we have made great progress since the beginning of the Spring semester.