I went to the Arebyte Gallery to see RGBFAQ.
The Gallery is located on the detached Ballardian microcosm of City Island next to Canning Town station.
A Palestinian flag was flapping from one of the apartment balconies. I noticed that there was already a crack in the paving, suggesting subsidence.
I waited in the dark while the previous session came to an end.
The exhibition is a series of animations projected onto large sculptures. The first three sculptures are visible here: (1) a replica of a V2 rising from a pile of arrows and regular solids, (2) a cube embedded in a plain wall, (3) a group of large regular solids and a torus. Not visible here is the 4th section, a triptych of screens shown in the image at the start of this post.
A starting point was indicated on the floor.
A spotlight illuminates the positions where the viewer has to stand for the sections of the show. The colour advances in sequence: white, red, green, blue, white again.
The screen on the V2 shows the orthogonal axes twirling around like the hands of a clock, and then we begin.
There is a low ambient drone soundtracking the show, but it does not intrude on the narration. We get a potted history of the evolution of computing power as a means to calculate and model trajectories, and then its expansion into general use in simulation technology. A collage of thinkers zooms past. At first it threatens to become Adam Curtis, but it settles into concentrating on a few well-defined themes and exploring them in more detail.
The cube section explored the early work on creating 3-dimensional images using 2-dimensional grids.
The next stage used one side of the group of solids and considered the incorporation of actual imagery to give substance to computer-driven animation. In a very rough chronology, I think we were at the end of the 80s at this point.
Going round to the other side of the group, we get up to date with the most recent work in compositing images.
The final section considered the convergence of the 2 strands of development that had been running through the story so far: the attempts to create simulations of “real images”, and the project of engineering “computer vision” that can comprehend visual input as well as a human percipient. It was mentioned that ethical problems in the use of privately-owned images of real people were being circumvented by training the new vision systems on synthetic images generated algorithmically. It was noted that this process is vulnerable to teaching biases about what “normal” objects look like.
The tone was neither pessimistic nor notably tech-boosting. It was said that computer images are now close to matching photographs… but photographs are not like actual visual impressions, even if colorised. Meredith Frampton’s portraits are a great achievement of a variety of “realism”, but they are not quite “photographic” nor are they authentically visual (nor blurring away at the periphery). There is a booklet with this show, with plenty of images, and I think it has the whole text of the commentary. Here are some pages:
Of course somebody else wondered about what made a picture into a representation some time before: