
Render of Dealey Plaza based on GIS data and SketchFab model by “Visualiser”.
Thinking about JFK Reloaded as a model for interactive, immersive journalism, a logical next step was to explore how its space of possibility could be recreated. In JFK Reloaded, Dealey Plaza is modelled in a game engine that now looks visibly dated, and the original design of the plaza cannot be extracted from the game as a basis to build upon. I therefore applied several approaches I have been developing for creating 3D models to produce versions of Dealey Plaza in Dallas.
The main ones were:
1) Modelling Dealey Plaza architecturally from scratch in SketchUp, working from a plan of the area,
2) Amending found models of Dealey Plaza from Google’s 3D Warehouse,
3) Creating a model of the contemporary Dealey Plaza using geographical data from open-source databases (OpenStreetMap), and
4) Reconstructing photogrammetric models from drone footage.
These approaches varied in their utility and success. The goal was to develop workflows for creating large open spaces that could be used for immersive storytelling and journalism. The likely end point of this research is a mixed approach, which I consider below.
Feelings
My initial hope for this project was to extract the space built by the creators of JFK Reloaded and load it into a level-authoring tool. That might have been a more manageable task had JFK Reloaded been built on one of the id Tech engines or the Source engine used for Half-Life 2. As it is, JFK Reloaded is built on a more obscure engine: the one used for Carmageddon. Since building my models of Dealey Plaza, I have come across tutorials for extracting spatial data from games built on the BRender engine, the engine used to create JFK Reloaded, so I may revisit that option in the future. As for which approaches were the most successful, each had its advantages and disadvantages, but some were distinctly better than others.
Photogrammetric capture of Dealey Plaza from drone footage.
Starting with the least successful approach: I tried to build a model of Dealey Plaza from scratch in SketchUp, and I got quite a way into it. The results are not terrible. I used a contemporary plan of Dealey Plaza as an underlay and built some of the buildings with SketchUp’s shape and line tools. That proved very time-consuming, and I quickly fell back on plan B: looking for, and indeed finding, prefabricated elements to drop into the map. Using online databases and 3D model repositories is a workflow that journalist content creators are likely to gravitate towards if there is ever to be a rapid development workflow for immersive journalism in volumetric spaces.
However, access to those models is fragmented across several platforms (3D Warehouse, SketchFab, the Unreal Asset Store, TurboSquid and so on), and while some elements are free, others are not. Until there is a single platform offering comprehensive access to geographically accurate and realistic resources, sourcing assets will remain a complex task.
The most satisfying approach was using GIS data from OpenStreetMap to generate buildings within a geographically located area. This can all be accomplished within SketchUp using built-in tools and free plugins. The outcome varies with the area being recreated; because Dealey Plaza is an urban district of significance, building data already existed in the open-source database, which made the process straightforward. Finishing the model was a matter of filling in areas with textured photographic maps. The results look comprehensively better than some of the other outcomes, and the process is undoubtedly faster than building from scratch or assembling off-the-shelf elements, not least because not every element was available in the repositories.
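To sketch the idea behind this workflow: OpenStreetMap records building footprints and, for many urban buildings, a storey count, and the SketchUp plugins essentially extrude those footprints into blocks. The records and storey height below are illustrative values, not real OpenStreetMap output.

```python
# Sketch: turn OSM-style building footprints into extruded 3D block models.
# The data below is illustrative, not real OpenStreetMap output.

STOREY_HEIGHT_M = 3.0  # assumed average storey height

buildings = [
    {"name": "Building A",
     "footprint": [(0, 0), (30, 0), (30, 30), (0, 30)],  # metres, local grid
     "levels": 7},
    {"name": "Building B",
     "footprint": [(40, 0), (65, 0), (65, 25), (40, 25)],
     "levels": 8},
]

def extrude(building):
    """Return base and roof polygons plus a height derived from storey count."""
    height = building["levels"] * STOREY_HEIGHT_M
    base = [(x, y, 0.0) for x, y in building["footprint"]]
    roof = [(x, y, height) for x, y in building["footprint"]]
    return {"name": building["name"], "base": base, "roof": roof, "height": height}

models = [extrude(b) for b in buildings]
for m in models:
    print(f"{m['name']}: {m['height']:.1f} m tall, {len(m['base'])} footprint vertices")
```

This is why results vary by area: where the database lacks a storey count or footprint, there is nothing to extrude, and those gaps have to be filled by hand.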

Enscape Render of Dealey Plaza based on GIS data and SketchFab model by “Visualiser”.
Evaluation
While every approach has some value, the one worth continuing to pursue is drone footage, on its own or combined with LiDAR scanning. In particular, PolyCam has a feature that lets users upload video content, which it then algorithmically splits into individual frames to create photogrammetric models.
Using drone footage of Dealey Plaza, we generated a photogrammetric model of the contemporary setting. With further work, this model could be improved by importing it into a third-party modeller or photogrammetry processing tool such as Blender or RealityCapture. Where better models are available, it is possible to replace some buildings, rework elements that do not convert well photogrammetrically, such as fully grown trees, and remove glitches, which were abundant in the final model.
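The video-to-frames step that PolyCam automates can be sketched as follows. The frame rate, clip length, and sampling interval here are assumed values, not taken from the actual footage; the point is that sampling a frame every couple of seconds gives the photogrammetry solver overlapping views without thousands of near-duplicate images.

```python
# Sketch: choose which frames of a drone clip to feed a photogrammetry tool.
# Values are illustrative; a real pipeline would read them from the video file.

def sample_frame_indices(duration_s, fps, interval_s):
    """Indices of the frames sampled every `interval_s` seconds of the clip."""
    step = int(round(fps * interval_s))   # frames between samples
    total_frames = int(duration_s * fps)  # frames in the whole clip
    return list(range(0, total_frames, step))

# A hypothetical 2-minute clip at 30 fps, sampled every 2 seconds:
indices = sample_frame_indices(duration_s=120, fps=30, interval_s=2)
print(f"{len(indices)} frames selected, starting {indices[:5]}")
```

Sixty frames from a two-minute orbit is a plausible input set; too sparse and the solver loses overlap between views, too dense and processing time balloons for no extra geometry.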
Application
One issue that emerged with a purely photographic approach was how difficult it is to capture external spaces. Yet in reporting we need to be able to put individuals into external spaces for immersive journalism to have any utility, so we must find ways to develop exteriors rapidly. This exploration demonstrates that there are methods of rapid development using consumer-level software and tools within the reach of content creators. Combining geographical data with photogrammetric data is promising and could form a future workflow.
The best approach would be to combine the three most successful attempts: photogrammetric capture gives us the height of buildings and photographic textures to work from; open-source geographic data gives us accurate positioning, if not always the height or configuration of buildings; and architectural modelling enables us to create precise and aesthetically pleasing results that can be imported into game engines.
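One way this combination could work in practice can be sketched with a small piece of maths: given two control points whose positions are known both in the photogrammetric model's local coordinates and on the GIS grid (a hypothetical pairing, for illustration), a 2D similarity transform scales, rotates, and translates the capture onto the geographically accurate positions.

```python
# Sketch: align a photogrammetric model (local coords) to a GIS grid using
# two control points seen in both systems. All point values are illustrative.

def similarity_from_pairs(src_a, src_b, dst_a, dst_b):
    """Build a 2D similarity transform (scale + rotation + translation).

    Treating 2D points as complex numbers, multiplication by a complex
    factor encodes scale and rotation together; addition encodes translation.
    """
    s1, s2 = complex(*src_a), complex(*src_b)
    d1, d2 = complex(*dst_a), complex(*dst_b)
    m = (d2 - d1) / (s2 - s1)  # combined scale and rotation
    t = d1 - m * s1            # translation

    def transform(p):
        q = m * complex(*p) + t
        return (q.real, q.imag)
    return transform

# Two corners of a building measured in the local model and on the GIS grid:
to_gis = similarity_from_pairs((0, 0), (10, 0), (100, 200), (120, 200))
print(to_gis((5, 5)))  # -> (110.0, 210.0)
```

With the photogrammetric mesh pinned to GIS coordinates like this, architecturally modelled replacements can then be dropped in at the correct positions.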
Conclusions
Working on this concept in several different ways was enlightening. It was helpful to see the utility of the various approaches, their outcomes, and their qualitative value. From an academic perspective, there is a debate in immersive journalism around the authenticity of computer-generated volumetric space. Photogrammetry offers one of the most straightforward routes forward in this regard, as does the use of open-source map data. Journalists already gather data from external sources and compile it together; there is an analogy between the work a journalist does as a newsgatherer and the work an immersive journalist might do, pulling together spatial data from different sources to arrive at a curated, aggregated and authentic outcome.