Light spaces
Which forms of representation are suitable for expressing the processes of digitization, the quantification of information, and the progressive mapping and measurement of our environment?
More and more digital systems are supplementing classic cartography with spatial and temporal information about the physical environment. Services such as Google Maps, the various GIS (geographic information system) offerings from government and private providers, and the sheer quantity of freely accessible photographs and videos on the Internet, some of which are published under open source or Creative Commons licenses, offer a wealth of information and methods that seem almost predestined to represent the glitches we are researching.
To depict the urban glitches, we chose methods that interweave analog and digital working techniques. By consciously provoking errors and alienation effects, images emerge with an aesthetic that corresponds to our view of the urban glitch.
Photogrammetry
The surveying technique was developed in 1851 by the French officer Aimé Laussedat, in parallel with the newly invented medium of photography. The term is made up of the ancient Greek words [phõs = light], [grámma = writing] and [métron = measure, length, size]. The basis of the technology is the restoration of the original geometry using the laws of central projection. Modern programs work with data sets of so-called dense point clouds, which are generated through laser scans or by interpreting photographs.
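The underlying geometric principle can be illustrated with a small Python sketch of the pinhole camera model; the focal length and coordinates are arbitrary example values, and real photogrammetry software solves the far more involved inverse problem of estimating 3D positions from many such projections.

import numpy as np

# Minimal sketch of central projection (pinhole camera model):
# a 3D point is mapped onto the image plane by scaling with
# focal length over depth. Values are arbitrary examples.

def project(point_3d, focal_length=35.0):
    x, y, z = point_3d
    return np.array([focal_length * x / z, focal_length * y / z])

point = np.array([1.2, 0.8, 10.0])  # a point ten units in front of the camera
print(project(point))               # its position on the image plane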
A large amount of image material is required as the starting point for this technique. Objects are circled in a hemispherical pattern and large areas are walked over in a grid in order to capture the required geometry photographically from all angles. Alternatively, the object to be depicted can also be reconstructed from individual frames of a video.
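Extracting frames from a video can be automated, for example with a few lines of Python and the OpenCV library; the file name and sampling interval below are placeholders, not values from our workflow.

import cv2  # OpenCV, assuming it is installed (pip install opencv-python)

# Save every 15th frame of a video as source material for photogrammetry.
# "walkaround.mp4" is a placeholder file name.
video = cv2.VideoCapture("walkaround.mp4")
frame_index = 0
saved = 0
while True:
    success, frame = video.read()
    if not success:
        break
    if frame_index % 15 == 0:
        cv2.imwrite(f"frame_{saved:04d}.jpg", frame)
        saved += 1
    frame_index += 1
video.release()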
When processed with computer programs, the position and shape of the objects are put back into spatial relation through algorithmic evaluation and interpretation. The high resolution of modern digital photography yields 3D models with several million polygons. Finally, the color information from the photographs is projected back onto the polygon meshes as a rectified texture.
Laser scan
Laser scanning is a special form of photogrammetry. Like the evaluation of photographs for geometric data, the laser scanning process also measures images using light. In contrast to photograph analysis, which is an estimation process based on the interpretation of image points, laser scanning is many times more precise. For the measurement, the surface is scanned line by line with a laser, and the distance to each individual measuring point is calculated from the transit time of the laser beam. The wavelength of the light beam, on which the metric system is based, allows an exact distance measurement in combination with the transit time. The technology is frequently used for terrain surveys from the air (airborne laser scanning).
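The transit time calculation itself fits into a few lines: the laser pulse covers the path to the surface and back, so the distance is half the travel distance of the light. The transit time in the following Python sketch is an arbitrary example value.

# Time-of-flight principle behind laser scanning.
SPEED_OF_LIGHT = 299_792_458  # metres per second

def distance_from_transit_time(transit_time_s):
    """Distance to the measured point in metres (pulse travels there and back)."""
    return SPEED_OF_LIGHT * transit_time_s / 2

# A pulse returning after 6.67 microseconds corresponds to roughly 1 kilometre.
print(distance_from_transit_time(6.67e-6))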
For our work, we used freely available data sets from the state of Upper Austria, which were visualized using the “CloudCompare” software.
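CloudCompare is a desktop application; as a hypothetical alternative, such a point cloud could also be loaded and displayed with a few lines of Python, for example with the Open3D library. The file name below is a placeholder, and survey data distributed as LAS/LAZ would first have to be converted into a format Open3D can read.

import open3d as o3d  # assuming the Open3D library is installed

# Minimal sketch: load and display a point cloud file.
pcd = o3d.io.read_point_cloud("terrain.ply")   # placeholder file name
print(pcd)                                     # reports the number of points
o3d.visualization.draw_geometries([pcd])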
AI-generated images
AI image generators use algorithms for unsupervised learning - learning without a previously known target value and without external reward - for example VQGAN (Vector Quantized Generative Adversarial Network) in combination with CLIP (Contrastive Language–Image Pre-training).
GANs (Generative Adversarial Networks) are systems in which two neural networks play off against each other. One of the networks – the generator – creates images or data. A second network – the discriminator – evaluates the results. In this way the system reacts to itself and improves its output. CLIP is an accompanying neural network that evaluates how well an image matches a text description; this assessment is fed back into the Generative Adversarial Network to steer the image toward the prompt. The video and image material generated in this way ranges from grotesque, difficult-to-interpret outputs to high-resolution portraits of people who never really existed.
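The interplay can be outlined in a highly simplified Python sketch: a latent image code is adjusted step by step until the decoded image matches a text prompt according to CLIP. The decoder function below is a pure placeholder standing in for the real, pre-trained VQGAN, and the sketch shows the principle rather than the notebook's actual code.

import torch
import clip  # OpenAI's CLIP package (pip install git+https://github.com/openai/CLIP)

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, preprocess = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float()
for p in clip_model.parameters():      # CLIP itself is not trained here
    p.requires_grad_(False)

text = clip.tokenize(["Blooming Schönbrunn Palace Park"]).to(device)
text_features = clip_model.encode_text(text)

latent = torch.randn(1, 16, 56, 56, device=device, requires_grad=True)
optimizer = torch.optim.Adam([latent], lr=0.1)

def decode_to_image(z):
    # Placeholder decoder: upsample the latent code to a 224x224 RGB tensor
    # so that the sketch stays self-contained and runnable.
    rgb = torch.nn.functional.interpolate(z[:, :3], size=(224, 224), mode="bilinear")
    return torch.sigmoid(rgb)

for iteration in range(200):           # each pass is one "iteration"
    image = decode_to_image(latent)
    image_features = clip_model.encode_image(image)
    # CLIP scores how well image and text match; the optimizer pushes
    # the latent code towards a higher score.
    loss = -torch.cosine_similarity(image_features, text_features).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()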
The production of AI-generated images and videos is possible without installing complex software. For the following images, Katherine Crowson's notebook was used on the Google Colaboratory platform. Notebooks are pieces of code that their authors share free of charge and that can be executed in the browser.
In the notebook used, the code blocks are executed in sequence. Various data sets, so-called models, can be loaded, which are then used to generate the images. Words or sentences, so-called text prompts, are entered in another field. It is also possible to upload an image and define it either as the source image or as the target image. As soon as all parameters are set, image generation can begin. The emerging image is passed back and forth between generator and discriminator again and again; each of these steps is called an iteration. After a predetermined number of iterations, a result is output. Once the desired result is achieved, the process can be stopped at any time. By stringing the resulting images together, a video or GIF can be created.
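Stringing the saved iteration images together into a GIF can be done, for example, with the Pillow library in Python; the file names and timing below are placeholder values, not those of the notebook.

from PIL import Image  # Pillow
import glob

# Collect the saved iteration images and combine them into an animated GIF.
frames = [Image.open(path) for path in sorted(glob.glob("iteration_*.png"))]
frames[0].save(
    "progress.gif",
    save_all=True,
    append_images=frames[1:],
    duration=120,   # milliseconds per frame
    loop=0,         # loop forever
)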
The following images were created with the prompt “Blooming Schönbrunn Palace Park” without an initial image and show various iteration steps. Every generation without a starting image begins with gray noise, which gradually transforms into an image with recognizable content.
If an initial image is used, it is reconstructed in the first step. For this example, a photo of the pier of the paddle steamer Schönbrunn served as the starting image, which was likewise processed with the text prompt “Blooming Schönbrunn Palace Park”.
Risography
Riso printing is a mechanical stencil printing process, similar to screen printing. A film is thermally perforated at the points where the ink is to pass through onto the paper. Sheets of paper are then fed under a rotating printing cylinder and the ink is rolled on. The basis of Riso ink is rice bran oil. The stencil material, the so-called master film, is also made from renewable, plant-based raw materials.
Riso printing machines look similar to conventional photocopiers and digital printing machines. The two removable printing cylinders, however, are hidden inside.
The manufacturer offers 21 standard and 50 special colors. Since each color requires its own refillable cylinder, which entails high acquisition costs, often only a limited selection of colors is available. These can, however, be applied in different opacities or mixed by overprinting.
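How two overlaid inks mix can be roughly previewed on screen, assuming a simple multiplicative model of overprinting. The ink values and file names in the following Python sketch are approximations and placeholders, not official manufacturer colors.

import numpy as np
from PIL import Image

# Rough preview of two overprinted Riso inks on white paper.
RED = np.array([255, 72, 72]) / 255.0     # approximate ink colors
BLUE = np.array([0, 120, 191]) / 255.0

gray = np.asarray(Image.open("artwork.png").convert("L")) / 255.0
coverage = 1.0 - gray                     # dark areas receive more ink

def ink_layer(ink, amount):
    """White paper tinted towards the ink color according to coverage."""
    return 1.0 - amount[..., None] * (1.0 - ink)

# Overprinting two transparent inks is approximated by multiplying the layers.
preview = ink_layer(RED, coverage) * ink_layer(BLUE, coverage * 0.5)
Image.fromarray((preview * 255).astype(np.uint8)).save("riso_preview.png")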
A separate master film must be made for each spot color used, and a maximum of two colors can be applied in one pass. The process is therefore less suitable for one-off pieces, but it is ideal for small to medium runs; Riso printing can be worthwhile from an edition of about 10 copies.
Up to 150 sheets per minute can be produced using the Riso printing process, so flyers, postcards, brochures or magazines can be made quickly.
During every Riso printing run, the position of the cylinders must be checked and, if necessary, corrected manually so that the individual colors overlap as planned. Even with optimally adjusted printing cylinders, a slight shift can occur.
In our first attempts at printing we used black, red and blue. Before the cylinders were aligned, the shifts in the color planes were reminiscent of anaglyph images.
Anaglyph
Anaglyph [from ancient Greek: aná = on top of each other and glýphō = to chisel, engrave, represent] is a stereoscopic image technique (similar to the 3D processes commonly used in cinema today) in which half images in complementary colors are superimposed. The technique emerged at the same time as photography and photogrammetry in the mid-19th century. The physicist Wilhelm Rollmann is considered its inventor.
Spatial depth arises in the human brain from the geometrically differing visual impressions of the left and right eyes. Anaglyph images make use of this principle: two perspectively offset half images are layered to create a three-dimensional impression. Special anaglyph glasses with color filters separate the image information for the two eyes, and the brain interprets the differing information as spatial depth. While red-blue channels were previously used for image separation, red-cyan is now mainly used. This has the advantage that cyan contains green and blue in equal parts and therefore, together with red, covers all three primary colors.
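The principle can be reproduced with a few lines of Python: the red channel of the left image is combined with the green and blue channels of the right image. The file names below are placeholders for two perspectively offset photographs of the same size.

import numpy as np
from PIL import Image

# Combine two offset views into a red-cyan anaglyph.
left = np.asarray(Image.open("left.jpg").convert("RGB"))
right = np.asarray(Image.open("right.jpg").convert("RGB"))

anaglyph = np.zeros_like(left)
anaglyph[..., 0] = left[..., 0]        # red from the left eye's view
anaglyph[..., 1:] = right[..., 1:]     # green and blue (cyan) from the right

Image.fromarray(anaglyph).save("anaglyph.png")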
Despite the use of red-cyan channels, blue tones are difficult to filter. Both blue and cyan filter foils block only a few shades of blue optimally. Red tones, on the other hand, are more easily hidden by the foil.
Since the blue available on the Riso printing machine cannot be filtered optimally, we decided to print the poster inserted into our magazine using conventional laser printing. In digital form, the anaglyph images work particularly well when viewed with red-cyan glasses. You can find all anaglyph images at Anaglyph 3D.