A Close-up Look at Cultural Heritage in Deep Space 8K

[Screenshot: Tikal]

The new Deep Space 8K, set to premiere on August 7, 2015, sets a new international standard for high-definition graphics: 8K resolution at 120 Hz in stereo 3-D, displayed on two 16×9-meter projection surfaces on the space’s wall and floor. But to take full advantage of the breathtaking clarity this makes possible, it’s not enough to simply upgrade the hardware. The content presented in the new Deep Space 8K also has to have what it takes to deliver the ultimate visual experience to audience members. Since hardly any videos and applications have been produced in 8K resolution up to now, Ars Electronica Futurelab staffers had two options: develop their own content for the new Deep Space 8K or prepare preexisting content to meet the high-end demands of 8K resolution. They did both.

Material in the “Cultural Heritage” lineup enables spectators in the new Deep Space 8K to experience statues, buildings and entire architectural ensembles as three-dimensional visualizations. A non-profit organization named CyArk uses a 3-D laser scanning process to digitize world cultural heritage sites. The result of this procedure is a huge quantity of points referred to as a point cloud. To visualize these point clouds in the new Deep Space 8K, Roland Aigner of the Ars Electronica Futurelab developed an improved point-cloud renderer. The high quality of 8K resolution and stereo 3-D, combined with the improved point-cloud renderer, creates the feeling of almost being able to reach out and touch these statues, buildings and architectural ensembles.

We recently had a chance to chat with Roland Aigner, and he revealed what visitors can expect in the new Deep Space 8K thanks to the improved point-cloud renderer.

Roland, what exactly is a point cloud?

Roland Aigner: Digitizing physical surroundings at a particular location via 3-D laser scan produces a huge assemblage of points; more specifically, 3-D spatial coordinates that describe the points’ positions. The entirety of these points is called a point cloud. A somewhat imprecise comparison in a two-dimensional context would be to the pixels that, together, constitute a complete picture. If you visualize a point cloud in a virtual space such as our Deep Space, the result is a three-dimensional representation of the scanned environment consisting of individual points. In this way, humankind’s cultural heritage can be depicted virtually.
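To make the concept concrete, here is a minimal sketch of how such a point cloud might be represented in code. The struct layout, the field names and the optional per-point color are illustrative assumptions, not the data format actually used in Deep Space 8K:

```cpp
#include <cstdint>
#include <vector>

// One sample from a 3-D laser scan: a position in space, optionally with
// a color captured by the scanner. (Illustrative layout only.)
struct Point {
    float x, y, z;          // 3-D spatial coordinates, e.g. in meters
    std::uint8_t r, g, b;   // per-point color, if the scanner provides one
};

// A point cloud is simply the collection of all scanned points --
// the 3-D analog of the pixels that make up a 2-D image.
using PointCloud = std::vector<Point>;
```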

[Screenshot: Mesa Verde]

Credit: Roland Aigner, CyArk

The Cultural Heritage program has been a mainstay in Deep Space all along. How will the improved point cloud renderer in Deep Space 8K change this?

Roland Aigner: First of all, the performance of the hardware has been improved considerably, which, in combination with newly developed software, enables us to achieve much higher image resolution. Plus, we can depict much larger quantities of points. Due to the resulting higher point density, the objects come across as solid, and spectators tend to no longer perceive the individual points as such. In addition, we’ve implemented various algorithms and techniques that make it possible to depict these huge quantities of data in real time. A few of the point cloud sets that were spectator favorites in the “old” Deep Space will remain on the Deep Space 8K program: for instance, Angkor Wat, the best-known portion of the world’s largest temple complex, and the Leaning Tower of Pisa. Previously, it was impossible to screen them at the highest achievable resolution because of the hardware’s limitations. Now, visitors can enjoy these same images at consummate quality.

Moreover, we’ve acquired new data sets from the Centre for Digital Documentation and Visualisation (CDDV) and the Scottish Ten program that are considerably more detailed than what we had before. These point clouds have a much higher degree of detail, which correspondingly enhances the viewer experience in the new Deep Space 8K. Scottish Ten is a project subsidized by the Scottish government for the purpose of documenting UNESCO World Cultural Heritage sites. At present, they’ve scanned five such sites in Scotland and five in other countries, and we’re pleased to be able to present the first three of them in Deep Space 8K. Since 2009, we’ve been working together with CyArk, the organization that put us in touch with Scottish Ten. The results of this documentation process will be presented to the public for the first time in Deep Space 8K.

We’ve also implemented some new features, like virtual camera pans, that significantly simplify navigation in the visualizations, and this makes it considerably easier for our Infotrainers to get across the information they aim to impart. Up to now, navigation with the iPod wasn’t very convenient; it really took a lot of concentration. So we made every effort to enable the Infotrainers to focus more strongly on conveying the actual content. For example, in each data set we’ve defined a few fixed points in the visualization, so now all you have to do is push a button on the iPod and you automatically arrive at that particular point without having to navigate manually.
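Those fixed points can be thought of as stored camera poses; pressing a button then triggers a smooth, automatic pan from the current view to the stored one. Here is a sketch of that idea; the pose structure, the smoothstep easing and all names are assumptions for illustration, not the actual Deep Space application:

```cpp
#include <vector>

// A predefined viewpoint in a data set: camera position plus a yaw/pitch
// orientation. (Hypothetical structure.)
struct CameraPose {
    float px, py, pz;   // position
    float yaw, pitch;   // orientation in radians
};

// Linear interpolation helper.
static float lerp(float a, float b, float t) { return a + (b - a) * t; }

// Blend smoothly from the current pose toward a stored waypoint.
// t runs from 0 to 1 over the duration of the automatic pan; the
// smoothstep easing avoids abrupt starts and stops.
CameraPose panTowards(const CameraPose& from, const CameraPose& to, float t) {
    float s = t * t * (3.0f - 2.0f * t);  // smoothstep easing
    return {
        lerp(from.px, to.px, s),
        lerp(from.py, to.py, s),
        lerp(from.pz, to.pz, s),
        lerp(from.yaw, to.yaw, s),
        lerp(from.pitch, to.pitch, s),
    };
}

// Each data set carries its own list of fixed viewpoints; a button press
// on the handheld controller selects an index into this list.
std::vector<CameraPose> waypoints;
```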

[Photo]

Credit: Magdalena Leitner

You said these point clouds have a much higher degree of detail. What does that actually mean?

Roland Aigner: In the old depictions, the distance between the individual points in the point cloud was approximately 10 centimeters on average, which means that the clouds were very “thin” in certain spots and viewers could see between the points. In the new Deep Space 8K, the points are grouped more densely and displayed by means of a special rendering procedure for point clouds. As a result, the depictions come across more like solid objects. In some of the new data sets, the interval between individual points is only one centimeter, which in turn means correspondingly enormous quantities of data that have to be managed.
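To get a feel for what that jump in density means: the number of points on a scanned surface grows with the inverse square of the spacing, so a tenfold finer grid yields roughly a hundred times as many points. A back-of-the-envelope calculation (the 100 × 100 m surface area and 15 bytes per point are assumed figures for illustration, not numbers from the interview):

```cpp
#include <cstdio>
#include <initializer_list>

// Rough estimate of how point spacing drives data volume: points on a
// scanned surface scale with the inverse square of the spacing.
int main() {
    const double area_m2 = 100.0 * 100.0;   // assumed scanned surface area
    const double bytes_per_point = 15.0;    // e.g. 3 floats + RGB (assumed)

    for (double spacing_m : {0.10, 0.01}) { // 10 cm grid vs. 1 cm grid
        double points = area_m2 / (spacing_m * spacing_m);
        std::printf("spacing %.0f cm -> %.0f million points, ~%.0f MB\n",
                    spacing_m * 100.0, points / 1e6,
                    points * bytes_per_point / 1e6);
    }
    // 10 cm: ~1 million points; 1 cm: ~100 million -- a hundredfold jump.
}
```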

So, how did you respond to the challenges posed by huge amounts of data in Deep Space 8K?

Roland Aigner: In fact, the degree of detail of some of the scans we’ve recently acquired is too high for real-time depiction. After all, the actual mission that Scottish Ten and CDDV are pursuing in digitizing cultural heritage sites is detailed documentation, not three-dimensional depiction in real time. In short, these scans weren’t produced with our needs in mind. That’s why, for visualization purposes, we have to reduce the resolution, bringing it down to precisely the point at which the hardware and software work together smoothly and we can get as much as possible out of these images. The original versions of some of these new data sets contain half a billion points, which is simply too much for our purposes.
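The interview doesn’t specify how the resolution is reduced; one common technique for thinning a scan down to a target density is voxel-grid subsampling, sketched below under that assumption: overlay a 3-D grid on the cloud and keep at most one point per cell.

```cpp
#include <cmath>
#include <cstdint>
#include <unordered_set>
#include <vector>

struct Point { float x, y, z; };

// Voxel-grid subsampling: keep only the first point that lands in each
// cell of a 3-D grid with cell size `cell` (meters). This caps density at
// roughly one point per cell -- one common way to shrink a half-billion-
// point scan to a renderable size. (Illustrative; the article does not
// name the reduction method actually used.)
std::vector<Point> subsample(const std::vector<Point>& cloud, float cell) {
    std::unordered_set<std::uint64_t> occupied;
    std::vector<Point> result;

    // Quantize a coordinate to an integer cell index, keeping 21 bits
    // so that three axes pack into one 64-bit key.
    auto q = [cell](float v) {
        return static_cast<std::uint64_t>(
            static_cast<std::int64_t>(std::floor(v / cell)) & 0x1FFFFF);
    };

    for (const Point& p : cloud) {
        std::uint64_t key = (q(p.x) << 42) | (q(p.y) << 21) | q(p.z);
        if (occupied.insert(key).second)  // first point in this cell?
            result.push_back(p);
    }
    return result;
}
```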

In our setup, we always have the majority of the data stored in memory, which is why we first have to take this gigantic quantity of data and reduce it, and then apply various algorithms such as so-called view frustum culling so that, instead of working with the complete data set, we only have to process the points that are visible on the projection surface at the moment. Furthermore, a level-of-detail implementation makes it possible to vary the depiction’s degree of detail. In other words, the entire data set is divided up into individual segments, and the further away the audience is from a particular segment, the lower the degree of detail that segment is rendered in. This eliminates unnecessary rendering of details that wouldn’t be clearly visible anyway due to their distance from the spectator.
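Roughly sketched in code, the two techniques fit together like this: the cloud is partitioned into bounded segments, each segment’s bounding sphere is tested against the six planes of the view frustum, and the surviving segments are drawn at a detail level chosen by their distance from the camera. The structures and thresholds below are illustrative, not the Futurelab’s actual renderer:

```cpp
#include <cstddef>
#include <vector>

// A chunk of the full point cloud with a bounding sphere and several
// precomputed detail levels (progressively sparser point subsets).
struct Segment {
    float cx, cy, cz, radius;              // bounding sphere
    std::vector<std::vector<int>> lods;    // lods[0] = densest subset
};

// A view-frustum plane in the form ax + by + cz + d >= 0 for the inside.
struct Plane { float a, b, c, d; };

// Sphere-vs-frustum test: a segment is culled as soon as its bounding
// sphere lies entirely on the outside of any one of the six planes.
bool inFrustum(const Segment& s, const Plane frustum[6]) {
    for (int i = 0; i < 6; ++i) {
        const Plane& p = frustum[i];
        if (p.a * s.cx + p.b * s.cy + p.c * s.cz + p.d < -s.radius)
            return false;
    }
    return true;
}

// Pick a detail level by distance: nearby segments use lods[0] (full
// detail), distant ones a sparser subset. Distance thresholds are made up
// for illustration; assumes every segment has at least one LOD.
const std::vector<int>& selectLod(const Segment& s,
                                  float camX, float camY, float camZ) {
    float dx = s.cx - camX, dy = s.cy - camY, dz = s.cz - camZ;
    float dist2 = dx * dx + dy * dy + dz * dz;
    std::size_t level = dist2 < 100.0f ? 0 : dist2 < 2500.0f ? 1 : 2;
    if (level >= s.lods.size()) level = s.lods.size() - 1;
    return s.lods[level];
}
```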

[Photo]

Credit: Magdalena Leitner

What does “real-time” mean in the term real-time visualization?

Roland Aigner: Real-time basically means that the images that appear on screen are rendered during the presentation itself, at a certain speed, and not in advance as in the case of, say, an animated film shown in a movie theater. Another example of real-time rendering is computer games. In conjunction with computer graphics, “real-time” is an elastic concept, and some people prefer the term “interactive,” since what’s basically called for is rendering each individual image fast enough to depict the application interactively. In a real-time visualization, you can use, for example, a gamepad or, in our case, an iPod to navigate in the application and modify content while it’s running. In this connection, it’s essential in many applications for the update rate [the rate at which the image is refreshed] to be high enough to make movements appear smooth and thus prevent jerkiness or flickering. Nevertheless, the update rate that’s necessary to depict visualizations in real time isn’t precisely defined. I’ve come across companies in the product visualization sector that are satisfied with 10 Hz, since they’re strongly focused on photorealistic representation. In the computer gaming industry, the rate is typically 30 to 60 Hz, depending on the genre. But the frame rate we aim to attain is 120 Hz, which means that each individual image we screen has to be rendered by the graphics cards in 1/120 of a second. So, in 3-D stereo mode, we achieve 60 Hz for each eye, and, for instance, in a camera pan, viewers perceive the movement as totally smooth.
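The arithmetic behind that target: at 120 Hz, each frame must be finished within 1/120 of a second, about 8.3 milliseconds, and in stereo the left-eye and right-eye images alternate so that each eye still sees 60 updates per second. A minimal timing skeleton; renderFrame is a placeholder, not real renderer code:

```cpp
#include <chrono>
#include <cstdio>

int main() {
    using clock = std::chrono::steady_clock;

    // At 120 Hz, every frame must be rendered within 1/120 s (~8.3 ms).
    const double budget_ms = 1000.0 / 120.0;
    std::printf("per-frame budget at 120 Hz: %.2f ms\n", budget_ms);

    // Skeleton of a real-time loop: time each frame and compare it to the
    // budget. renderFrame() stands in for the point-cloud rendering work.
    auto renderFrame = [] { /* draw the currently visible points here */ };

    for (int frame = 0; frame < 120; ++frame) {
        auto t0 = clock::now();
        renderFrame();  // in stereo mode, left/right eye images alternate,
                        // giving each eye an effective 60 Hz refresh
        double ms = std::chrono::duration<double, std::milli>(
                        clock::now() - t0).count();
        if (ms > budget_ms)
            std::printf("frame %d missed budget: %.2f ms\n", frame, ms);
    }
}
```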

Deep Space 8K opens on August 7 at 6 PM. On the subsequent opening weekend, Saturday and Sunday, August 8-9, from 10 AM to 6 PM, Deep Space 8K launches an extensive program in which Cultural Heritage is not to be missed.
