Model


Code+ Presentation

A video presentation from the Summer 2021 Code+ research team, with the first views of historical maps procedurally reconstructed using the Unity engine.

GIS and Chorographic Views

Chorographies are often created from a combination of views of the area depicted, with several angles or points of reference being used to create an image that may not have actually been visible from any one position. The Sandcastle team examined this idea of mixed views by using ArcGIS to place several chorographies into modern scans of the areas they depict, and then finding the angle and location that best match the view depicted in each chorography.

When the main chorographies the Sandcastle team worked on were imported into ArcGIS, they were paired with Cartesian maps to help align the objects depicted within the chorographies with real locations. The Cartesian maps were laid flat on top of the modern location in the position where they best matched known landmarks, whereas the chorographies were converted into billboards and placed perpendicular to the surface, at a location from which someone could theoretically see a view most closely resembling the chorography. At times the billboards ended up at a vantage point the chorographer could plausibly have reached and stood at; at other times they were set floating high in the sky, revealing the views to have been conjured from imagined, but still fairly accurate, perspectives. From their billboard positions, landmarks in the chorographies were then connected to modern landmarks in the GIS view, as depicted by the red lines. These views can be explored below.
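The matching itself was done interactively in ArcGIS, but the underlying idea can be sketched in a few lines of code. The example below is purely illustrative (the viewpoint, landmark names, and coordinates are invented): it pairs features identified in a chorography with modern map coordinates and computes the bearing and distance from a candidate viewpoint to each one, which is roughly the relationship the red connecting lines make visible.

```python
import math

# Hypothetical candidate viewpoint for a chorography billboard (longitude, latitude).
viewpoint = (-9.158, 38.691)

# Hypothetical landmark correspondences: label in the chorography -> modern coordinates.
landmarks = {
    "cathedral tower": (-9.138, 38.710),
    "river gate": (-9.145, 38.702),
    "fortress keep": (-9.151, 38.697),
}

def bearing_and_distance(origin, target):
    """Approximate initial bearing (degrees from north) and distance (km) on a sphere."""
    lon1, lat1, lon2, lat2 = map(math.radians, (*origin, *target))
    dlon = lon2 - lon1
    x = math.sin(dlon) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(x, y)) + 360) % 360
    a = math.sin((lat2 - lat1) / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    distance_km = 6371 * 2 * math.asin(math.sqrt(a))
    return bearing, distance_km

for name, coords in landmarks.items():
    b, d = bearing_and_distance(viewpoint, coords)
    print(f"{name}: bearing {b:.1f} deg, distance {d:.2f} km")
```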

Houdini and 3D Modeling

One of the most essential parts of the Sandcastle workflow is the development of 3D assets, which we have begun building in the 3D modeling software Houdini. By feeding annotation data and chorographies into Houdini, the Sandcastle team has been able to convert 2D images into 3D objects that will serve as the basis for future downloadable toolkits.

Houdini is a procedural system that works by manipulating nodes in order to achieve a desired object. These nodes can also be annotated by attaching notes that describe their effect and purpose. Below is a network created by the Sandcastle team to describe the main process we used within Houdini. For more information about how this was created, see the Houdini blog post about Assembling the Network.
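Networks like this are normally assembled interactively in Houdini's network editor, but the same structure can be built and annotated with Houdini's Python API (the hou module, which is only available inside Houdini). The sketch below is a simplified stand-in for our actual network; the node and comment names are invented for illustration.

```python
import hou  # available only inside a Houdini session

# Create a container geometry node for a facade reconstruction.
obj = hou.node("/obj")
geo = obj.createNode("geo", "facade_reconstruction")

# A minimal chain: a flat grid standing in for a traced facade, extruded to give it depth.
grid = geo.createNode("grid", "facade_outline")
extrude = geo.createNode("polyextrude", "facade_depth")
extrude.setInput(0, grid)

# Attach notes to the nodes so their effect and purpose are visible in the network editor.
grid.setComment("Placeholder for the 2D facade traced during annotation")
extrude.setComment("Pushes the facade into 3D; depth is driven by a parameter")

geo.layoutChildren()
extrude.setDisplayFlag(True)
```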

Within Houdini, users can create and manipulate objects by interacting with various parameters assigned to the objects they work with. The procedural nature of Houdini allows users to alter parameters that impact nodes on a variety of levels, changing the object they are manipulating without having to go back to each node. This allows for quick alterations of an object's size, shape, position, and general composition. The Sandcastle team aims to import parameter-altering functions into the Unreal game engine to provide users with the tools to create their own 3D manipulations without having to recreate the entire Houdini process from scratch.
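As a small illustration of that procedural behavior, the parameters of the sketch network above can be changed from Python without touching any other node; everything downstream re-cooks automatically. The node names here carry over from the previous (invented) example.

```python
import hou

geo = hou.node("/obj/facade_reconstruction")
grid = geo.node("facade_outline")
extrude = geo.node("facade_depth")

# Resize the facade footprint and move it within 3D space.
grid.parm("sizex").set(12.0)                # width
grid.parm("sizey").set(6.0)                 # height
grid.parmTuple("t").set((4.0, 0.0, -2.5))   # translate in X, Y, Z

# Change the extrusion distance; downstream nodes update procedurally.
extrude.parm("dist").set(1.5)
```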

Most of the Sandcastle team's current Houdini work has been done with data obtained from the Book of Fortresses, with plans to examine other chorographies in the future. Example parameter manipulations and how they impact the objects they apply to can be seen below.

Houdini allows users to edit parameters to alter values such as height, width, and orientation without having to recreate the object or edit previous nodes.


Camera parameters can be adjusted to fit 2D chorographies to 3D landscapes.


2D objects projected by a camera can be manipulated onto a 3D landscape.


Objects can be shifted within 3D space.


Objects shifted within 3D space can be adjusted relative to the camera so that they appear the same within the 2D chorography perspective while occupying different positions and scales in 3D space.
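That last manipulation relies on a basic property of perspective projection: an object's apparent size depends on the ratio of its real size to its distance from the camera, so scaling both by the same factor leaves the 2D view unchanged. A minimal sketch of the relationship, with made-up numbers:

```python
def projected_size(real_size, distance, focal_length=1.0):
    """Apparent size of an object under a simple pinhole projection."""
    return focal_length * real_size / distance

# A facade 10 units wide, 50 units from the camera...
near = projected_size(real_size=10, distance=50)

# ...appears the same as one three times larger placed three times farther away.
far = projected_size(real_size=30, distance=150)

print(near, far)  # both 0.2: identical in the 2D chorography view
```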

Inspiration for the Sandcastle Project

The Book of Fortresses project is an ongoing digital project that began in 2017 with assistance from Duke University's Undergraduate Research Support (URS) office and the Wired! Lab for Digital Art History and Visual Culture. This project, led by Edward Triplett, was the initial inspiration for the application to the NEH's Office of Digital Humanities (ODH) to develop the Sandcastle Workflow via a Phase II grant. Thus far, roughly half of the 120 perspective drawings in this Portuguese book from 1509 have been annotated and turned into categorized JSON data that can be ingested into the procedural modeling software Houdini. For more information about this source and digital project, and to view a 3D GIS system that has aimed to deconstruct and analyze the book at multiple spatial scales, see www.bookoffortresses.org.

In summer 2020, a small team of undergraduates, including David Mellgard, Emma Rand, Mohammad Khatami, and Arjun Rao, worked on in-depth visual annotation of the 17th-century views of London and Lisbon. This project was funded by the Data+ program at Duke University and was directed by Samuel Horewood. Within these two images, the students labeled more than 8,000 objects and object parts using the Supervisely web application. These images, and the data derived from them, acted as tests for the Sandcastle Workflow at the high end of the spectrum of map/view complexity.

This project was inspired by previous work on "The Book of Fortresses" as well as research focused on chorographic views of London and Lisbon.

Database of premodern city views

In the summer of 2020, our team began collecting premodern "maps" that have been described using the following terms: chorographies, views (including oblique, bird's-eye, perspective, and landscape views), prospects, profiles, portraits, and panoramas. We collected our findings in an Airtable database.

The collection below is a subset of images that were selected for their suitability for visual annotation and eventual translation into 3D environments using the Sandcastle Workflow:

How did we annotate these chorographic views?

The Sandcastle annotation process can be broken down into four main steps:

  • Create labels describing object types
  • Outline a facade in the map
  • Assign pre-made tags to the facade
  • Link associated facades together (optional)

Though these steps are generally followed in order, the fluidity of our process meant that at times we would go back a few steps to add tools or tags to be used later. Annotation was performed at the facade level, with each facade of an object being individually outlined and given labels.

Creating Labels

To begin, we create labels describing the characteristics we want to assign to the facades we will be annotating. These are made before entering the map, though the label maker should know the general map components before they begin creating their labels. Labels can be further broken down into two components: classes and tags. Classes are the first level of labels, which describe the basic facade or object type. Examples of these include "fortress part," "house part," "people," and "roads and paths." Classes are typically selected just before annotating an object once on-map annotation has begun. Once a class has been created, it can then be assigned sub-labels known as tags. Tags are the second level of labels, which describe the characteristics of an object. Using the fortress part class as an example, associated tags included "fortress part type," "facade or plane orientation," "buttressed," and "tower shape." When in the map, after selecting a class and outlining an object, the tags then appear as options to select in the sidebar. While an object must be assigned a class, it does not need to be assigned any tags, and not all tags listed in a class apply to all facades. For example, a fortress may have a tower but no buttresses, in which case the "buttressed" tag would not be selected.
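In practice this produces a small two-level vocabulary of classes and tags. One way such a label set might be written down as data is sketched below; the class and tag names come from the examples above (plus a few invented for illustration), and the structure itself is illustrative rather than Supervisely's exact schema.

```python
# Illustrative two-level label vocabulary: classes, each with its own optional tags.
label_schema = {
    "fortress part": ["fortress part type", "facade or plane orientation",
                      "buttressed", "tower shape"],
    "house part": ["house part type", "facade or plane orientation"],  # invented tags
    "people": [],
    "roads and paths": [],
}

# A class is required for every annotated facade; tags are optional, so a fortress
# facade with a tower but no buttresses simply leaves "buttressed" unselected.
```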

In summary, the first step is to create classes that describe a basic facade or object, and then tags that describe the characteristics of that facade or object; some of these tags may not apply to every specific object, even though they apply to the general class.

Outlining Facades

Once labels describing the facades have been created, the facades can be outlined. This step involves first selecting the class the facade belongs to, and then simply tracing around the edges of the facade, capturing the image within. In Supervisely, this is typically done by using the "Add bitmap" tool, placing a single point within the facade using the "brush" tool, and then moving to the "Polygon Fill" tool to complete the outline. Classes are assigned specific colors, so the outlined facade gains a translucent overlay in its class's color. Once the outline is complete, the checkmark in Supervisely's top bar can be pressed, populating the sidebar with the various tags assigned to the class.
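The result of outlining is essentially a labeled region of the image. The sketch below shows the general shape of one annotated facade, loosely modeled on a Supervisely polygon export; exact field names can vary between geometry types and export versions, and the pixel coordinates are made up.

```python
# Loosely modeled on a Supervisely-style export; coordinates are hypothetical pixels.
facade_annotation = {
    "classTitle": "fortress part",
    "geometryType": "polygon",
    "points": {
        "exterior": [[412, 188], [498, 184], [501, 296], [409, 301]],  # traced outline
        "interior": [],  # holes within the facade, if any
    },
    "tags": [],  # populated in the next step
}
```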

Assigning Tags to Facades

Once a facade has been outlined, the tags belonging to its assigned class will appear in the sidebar. Whichever tags apply to the facade can then be selected; tags that apply to the class but not to the exact facade that has been traced are left unselected. This assigns those attributes to the facade, which can then be exited by reselecting the checkmark in the top bar. No other steps need to be taken, and at this point base annotation is complete. If the outlined facade is reselected, all of its tags will appear.

Linking Associated Facades

If several facades belong to the same object, they should be linked together. An example of this would be the three house parts - facade (wall), facade (wall), and roof - that make up a single house. By linking facades, they are grouped into what a computer can recognize as a single connected object. This is done by selecting the "link with other object" icon next to one of the objects to be linked, which opens a menu of other objects (in our case, facades) that the facade can be linked to. For objects consisting of three or more facades, all facades should be linked to a single main facade. If a facade is not part of an object with multiple facades, this step is not necessary.
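One way to think about the linking step is as grouping facade annotations under a shared object. The sketch below is a hypothetical representation (not Supervisely's internal format) that links two walls and a roof to a single main facade so that downstream tools can treat them as one house.

```python
# Hypothetical grouping of facade annotations into one object.
facades = [
    {"id": 101, "classTitle": "house part", "part": "facade (wall)"},
    {"id": 102, "classTitle": "house part", "part": "facade (wall)"},
    {"id": 103, "classTitle": "house part", "part": "roof"},
]

# Link every other facade to a single main facade, as described above.
main_id = 101
links = [{"facade": f["id"], "linked_to": main_id} for f in facades if f["id"] != main_id]

# Downstream, the house is the main facade plus everything linked to it.
house = {main_id} | {link["facade"] for link in links}
print(sorted(house))  # [101, 102, 103]
```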

The Sandcastle annotation process can be broken down into four main steps: create labels, outline a facade, assign tags to the facade, and link associated facades together.

What is the Sandcastle Workflow?

The Sandcastle workflow will provide the following:

  • Step-by-step instructions to help scholars repeat each stage of the workflow - from annotating images using the "Supervisely" web application, to manipulating parametric models of map/view features in Houdini, to navigating the environment in the Unity game engine
  • A downloadable kit of "Houdini Digital Assets" - simplified parametric models of common map features that are driven by the original 2D visual annotation work in Supervisely
  • A toolkit for the Unreal 4 game engine titled "Sandcastle," containing a set of interactive tools to help orient viewers inside of the 3D mapping environment
  • A database of premodern "Chorographic" images with a focus on the cities of London and Lisbon
  • An annotated bibliography of articles and books on a variety of topics related to premodern city views, including critical cartography, chorography, borders and frontiers, city walls and fortifications, urban morphology, and the material culture of mapmaking
  • Example 3D scenes derived from the inspiration for the project - a 16th-century book of Portuguese fortress drawings known as the Livro das Fortalezas (Book of Fortresses)

The technical aspect of the Sandcastle workflow begins with annotation, performed in a piece of software called Supervisely. Annotation allows us to categorize and identify each object in a map, producing a set of clipped images and data that can be used both for 3D visualization and statistical analysis.
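Because the exported annotations are structured data, simple statistics fall out almost for free. The sketch below assumes a Supervisely-style export saved as annotation.json (a hypothetical file name) and counts how many annotated facades of each class appear in a single view.

```python
import json
from collections import Counter

# Hypothetical path to one exported annotation file.
with open("annotation.json", encoding="utf-8") as f:
    annotation = json.load(f)

# Supervisely-style exports list annotated regions under "objects",
# each carrying the class it was assigned ("classTitle").
counts = Counter(obj["classTitle"] for obj in annotation.get("objects", []))

for class_title, n in counts.most_common():
    print(f"{class_title}: {n}")
```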

After a chorography has been annotated, it is imported into a program called Houdini where it can be transformed into a 3D landscape. This is performed on both the object and the chorography level, with the goal being to create a kit of digital assets that can be transported into the Unreal game engine.
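Packaging that Houdini work as a reusable kit generally means wrapping a network up as a Houdini Digital Asset. A minimal sketch of that step using Houdini's Python API is shown below; the node path, asset name, and file name are placeholders rather than the project's actual assets.

```python
import hou  # available only inside a Houdini session

# Placeholder: a node containing the facade-reconstruction network.
network = hou.node("/obj/facade_reconstruction")

# Wrap the network as a Houdini Digital Asset so it can be reused and,
# via the Houdini Engine plug-in, loaded into a game engine.
asset = network.createDigitalAsset(
    name="sandcastle::facade_tool",
    hda_file_name=hou.expandString("$HOME") + "/sandcastle_facade_tool.hda",
    description="Sandcastle facade reconstruction tool (illustrative)",
)
```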

Once a chorography's major features have been identified, the base image can be imported into ArcGIS for georectification. A billboard of the chorography can then be placed in a modern scan of the area to identify the possible perspectives from which the chorography may have been created or on which it may have been based.

Outputs from "the Sandcastle workflow" includes both a technical workflow to help scholars create 3-D visualizations of premodern maps and views, as well as a research workflow that includes annotated bibliographies and a database of premodern maps and views.

Below is a simplified diagram of the Sandcastle workflow.

Student Work