Theatre in the age of intelligent machines

So this is an essay I wrote just for fun around 2001. Looking back it’s easy to see that this is an MA’s worth of thinking and work, but I did it out of hours over maybe four months while working my first internet job. I either didn’t have much of a life or I was very interested in what I was writing about – let’s choose the latter. Bits and pieces of it are in note format, but there is still a lot in here…

  1. Introduction
  2. Official and after-hours concepts of Information & IT
  3. So Who Cares About the Network?
  4. Theatre information – how far does it go?
  5. Part One: It Lives?
  6. Computer Theatre
  7. Embodiment and Interface
  8. Artificial Intelligence
    1. i. ‘Classical AI’
    2. ii. From Classical to Behavioral AI
    3. iii. Believable Agents
    4. v. Affective Computing
  9. Artificial Life – Spirit, Genetics and Conditions of Being
    1. i. Joy de Vivre
    2. ii. The early beginnings of ‘The Game of Life’
    3. iii. The Information Imperative & Evolutionary Computing
    4. iv. The ACTOR: a scorecard
  10. Part Two: ACTOR beware
  11. The ACTOR as Problematic Playmate
    1. i. The ACTOR is a performer too…
    2. ii. Problem One: Brenda Laurel and the Aristotelian Construct (Among Others)
    3. iii. Problem Two: The Virtual Reality Engine is another Male Self-suppressive
    4. iv. Problem Three: The Virtual Class Politic and Recombinant Techno Culture
    5. vi. The Anti-ACTOR
  12. So where does the Theatre stand in all this?
    1. i. Antidote One: History and The Reality Check
    2. ii. Antidote Two: The Positive Politics of Contested Realities and a Genetic Politic
    3. Todo list of topics for here
    4. iii. Antidote Three: Techne
    5. Antidote Four: Deal with Evolution
  13. Conclusion: a theatre in formation
    1. ACTOR
    2. Networks
  14. Problems and Antidotes
    1. Antidote Five: there is no antidote
  15. Notes


By examining the complex way that concepts of information and theatre interact, I hope to show that there are rich and exciting veins of possibility in contemporary theatre practice that must be tapped, both for the ‘theatre in formation’ and for the information society in general. In the brightly lit halls of IT (information technology) research institutions the world over, the notions of theatre and drama are used daily – and often reductively – to support or define our interaction with information, while theatre and performance groups sometimes play around the edges of this same technology with an equal level of naivete, often never quite getting to grips with what’s important about IT. I hope to show how IT is driving a new communication and cultural paradigm, how it uses theatrical or para-theatrical constructs to aid this drive, and how theatre and performance sit in the mix. I end by looking at areas of inquiry, for both IT and theatre, that would redress some deficiencies and critically recombine their discourses.

In considering the mesh between theatre and information, there’s quite a complex set of ideas and technologies that we need to keep in mind before we even get to any argument I might come up with, so here’s the basic layout of this paper: First we’ll look at what information is, then briefly at how information technology is (or isn’t) making its way into performance. From here we’ll take in some ideas relating to paradigms and models, particularly as they relate to info-culture and the theatre. Then, for a more extended period, we’ll flip over to the other side and look at how concepts of performance, theatre and drama receive explicit usage in the development of advanced information technologies – and take in a vast number of baffling acronyms on the way. To do this we’ll engage in a Frankenstein experiment realigned for the new millennium. Some analysis of the monster that we create will lead us to look at some general problems with IT and its borrowings of theatrical constructs. Having finished this we’ll try our hand at finding some ways to constructively cross-fertilise the two fields of theatre and information, and see why the examination of information technologies and performance is a vital, fascinating and necessary undertaking from the perspective of both the theatrical practitioner and the information technologist.

Official and after-hours concepts of Information & IT

The concept of information is slightly tricky, largely because of the wholesale use of the term information technology (IT) to cover a multitude of communications, network and commodity items, as well as the protocols they all operate on.

In one sense, information is a fuzzy point on a sliding scale of communication that starts with pure data (the squawk of a modem), continues with the assembly of an inferable construct (intelligible words on the screen), then moves from knowledge (what those words mean and might be communicating) through to wisdom (the apprehension of meaning through insight). It’s a fuzzy point because one person’s information is another person’s insight: stock prices in raw form mean nothing to me, but may mean a fortune to someone who’s interested.

In a more technical sense, there is also information science, which deals with things like signal strength, compression of data, network protocols, cryptography, noise and error correction. This technical aspect of information was kicked off in an article by Norbert Wiener {}, which also inspired the science of cybernetics: ‘The study of the theory of control systems with particular regard to the comparisons between machines and the nervous system of animals’ (Chandor 1977). Cybernetics underlies much of the radical thinking that we’ll be coming across in the course of this paper, and, considering that humans are animals too, provides the prefix ‘cyber’ that has given us so much marketing hype over the last decades, from cyberspace to cyberglove to cybersex. However, information in this context is defined by formulae and really is the domain of the ‘hard’ information scientist. It is perhaps best summarised as ‘control’.
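Chandor’s definition – control systems compared across machines and animal nervous systems – can be made concrete with the canonical cybernetic machine: a feedback loop that senses its own output, compares it to a goal, and feeds the error back as a correction. The sketch below is my own present-day illustration (the function name and numbers are invented), not anything from the period being discussed:

```python
# A minimal cybernetic control loop: the system measures its own state,
# compares it with a goal, and corrects itself by a fraction of the error.
def feedback_loop(target, current, gain=0.5, steps=20):
    """Proportional controller: each step closes a fraction of the gap."""
    history = [current]
    for _ in range(steps):
        error = target - current      # the 'difference' the system senses
        current += gain * error       # correction proportional to the error
        history.append(current)
    return history

# Starting far from the goal, the error shrinks geometrically and the
# system settles on its target - 'control' in the cybernetic sense.
trace = feedback_loop(target=20.0, current=5.0)
print(round(trace[-1], 3))
```

Everything from a thermostat to a gun-laying servo is a variation on this loop, which is why the armaments industry looms so large in the story that follows.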

There is a third way that we look at information which is less formalised but nevertheless persistent: we tend more and more to think of information as the means and product of a Network. So if, in the strict sense, information passes through a ‘control’ phase as it rushes around the world, and ends up being translated back into communication for an end user, in an everyday sense we automatically associate the concept of information with the idea of a Network. And the idea of the Network is both the people in an interest community (‘wetware’) and the network as distributed communications channels (software, protocols and hardware). This third associated notion of information, the looser idea of a Network and why it’s so important, is something we’ll look at in much greater depth shortly.

IT is a label that straddles all of the above definitions; it obviously refers to the convergence of networks, information devices such as the computer, and the obtaining of data, but the implicit promise is also that it is information in the communications sense that you’re going to get. That is, a leg up on the way to making knowledge and making decisions – typically to gain a business edge. So IT both enables, and gives priority and status to, the collection and dispersal of information as commodity. Of course IT is big business, and its spread seems to be somewhat invasive: from being just another way to not be able to talk to someone (the cell phone), to questions about whether we really want a ‘networked’ home or, more importantly, questions of equity and the growing gap between the info-rich and info-poor, the future of IT, considered just on its own, affects us all.

So Who Cares About the Network?

Of the three defining aspects of IT that I’ve listed above – communication, control and network – it is the last that is least understood as a general concept. We have a good sense of what communication and abstraction are from a general analysis of telematic media that commenced in depth with McLuhan, and control is something we have seen both as a complex concept in the writings of theorists like Foucault and as a science that finds its nadir in the armaments industry, but the network has not yet received its due. There is plenty of speculative writing about what a network might do and be (Gibson’s Neuromancer is the obvious example here) and the idea has gained a certain useful life in the ‘paranoia’ genre of film and TV (The Net, Enemy of the State, The X-Files…), but at the same time the Network has many more resonances and earthy implications. The network is going to turn up a lot in this essay; it’s the common riff that holds these weird and apparently contradictory modes (information and theatre) together.

Manuel De Landa’s War in the Age of Intelligent Machines (1991) is an excellent book that takes the network very seriously – as a fundamental part of the War Machine. De Landa spends quite some time examining the idea of the Network, and suggests that since the Enlightenment, most of western civilisation and its discontents can be loosely arrayed around three types of machine paradigm, or ‘machinic phylum’[ ‘Machinic phylum’ is, as De Landa acknowledges, a term originated by Deleuze and Guattari in ‘A Thousand Plateaus’, where it is used…]: clockwork, motor, Network. I’ll run through this briefly, as it gives us a good platform to look at the development of society in general, and the development of ideas about dealing with information in particular.

Very briefly: clockwork has to do with the acceptance of fate – God winds the spring of man and a life is played out according to preordained rules – and, later, with scientific accuracy and the confidence that everything could be measured and placed. The clock happens to coincide with the rise of the printing press, which helped in the displacement of geometry by numeracy[ Until the rise of the printing press it was just as easy for a scribe to deal in diagrams as in letters, and the manuscripts that dealt with the motions of the planets and geometry were rendered diagrammatically, both because there was no algebra to speak of and no good reason to use any other method. With the press, however, the diagram was laborious and tedious. ‘Luckily’ the rise of algebra meant that diagrams were replaced by equations, to the point where it is hard to imagine that most of the Greeks knew no algebra; it is significant that we learn about ratio and Pythagorean math in school, because that was it; a further understanding of anything from Ptolemy to Galileo requires only geometry.], but it’s the adoption of formulae that makes the accuracy of the clock useful, moving the principal scientific paradigm from ‘place’ to ‘pace’ (Bailey, 1996). Principally, the ‘killer app’ for the clock and the equation is navigation, the key empire-building technology of the day. Descartes used the paradigm of the formula to motivate his ‘Discourse on Method’, and the whole business of knowledge as a sequential endeavour was begun.

With the invention of the steam engine, a new type of energy and identity was given to the world. While a clockwork mechanism directs energy along a predetermined path until new energy is imparted to it, the motor produces new motion within itself: it uses a form of difference to extract energy from a reservoir in accordance with a circulation diagram. This motor paradigm, De Landa suggests, is the discovery and the advantage of Napoleon. The Emperor rapidly moved troops to points of weakness, and army units were empowered to renew their orders independently of a prime source. It’s interesting that this new impetus relies on soldiers being given more responsibility at a time when, fuelled (literally?) by the revolution, the proletariat are gaining more say in politics, making their patriotism the reservoir of the army.

Clearly, the motor model also implies a new political and personal model. The writings of Rousseau on education become widespread and it becomes commonplace for people to reinvent themselves, inserting themselves into the power machine where they serve best, rather than where they were born to. This obviously connotes the Romantic ideal: ideals become the individual’s reservoir, the gap between the ideal and the actual is the difference, and personal action according to value systems is the circulatory diagram. It is only in this climate that a modern theatre – and a modern psychology – can assemble itself, and it is not surprising that realism is the foremost call, for it is in the real that the trial insertions of the new individual are made; theatre becomes the workshop of society, with the individual’s choice becoming an intense act of freedom. It should follow, then, that a modern depth psychology can also be seen to be mapped by the motor paradigm: reservoir = subconscious, difference = repression/ego, circulation = Oedipus complex.

Continuing with De Landa’s warfare theme, the Second World War is the next paradigm-shift marker: at the beginning of the war, Hitler’s use of radio to form a loose and decentralised war machine resulted in the terror of the blitzkrieg. The use of a flexible attack force where each ‘node’ is organised by the network of a radio in real time is, if we can ignore the fact that it’s a thing of war, brilliant. It is exactly the same idea that is behind many of our current business models: ‘Just In Time’ companies use a network (radio, fax, internet) to coordinate their sales and delivery strategies. The whole thrust of ‘downsizing’ is about replacing a business hierarchy with a business network – but, unfortunately, someone recognised that a business network is ‘cheap’ and pursued that side of the deal relentlessly.

It is perhaps significant that WWII also gave us the technology that is fundamental to our topic of IT and the theatre: the programmable computer. Alan Turing, the brilliant mathematician who broke the Enigma code, is a key figure here. He not only developed the idea that formed the theoretical basis of the programmable computer, but also advanced the radical idea of computers posing as humans. The first was a class of machine, the ‘Universal Turing Machine’, which formed the logical underpinning of the modern computer. It was a simple theoretical device that had the conceptual ability to read or write a black or white dot to an infinitely long piece of tape[ The ‘Turing Machine’ is worthy of a whole book in itself: the startlingly simple theory of how a device, consisting merely of a writer that could make a dot or erase one, a reader that could tell if the tape was marked or not, an infinitely long length of read/write tape and a very simple look-up table, could be used to create a machine that could imitate any other machine. In its essence this is exactly the strength of the computer: it is not much by itself, but as a machinic chameleon it is supreme.]. It was soon enough that the first practical implementation of this idea was switched on; the ENIAC electronic computer, with somewhere around 18,000 vacuum tubes, replaced the 5000 women who each calculated one trajectory of one bullet per day. The second was the ‘Turing Test’, a simple affair for judging whether a machine is making an adequate simulation of the human mind: an interrogator asks questions of a person and a machine, and the machine passes the Turing Test if the interrogator cannot distinguish between the two. Turing thought that a machine would pass this test within a few years.
Unfortunately, at the time of his early death in 1954 (suicide after being persecuted by the English government for being gay), he was not to know that the Turing Test is still a long way from being passed[ Both the Turing Test in particular and AI in general suffer from the ‘Frame Problem’, first identified by McCarthy and Hayes in 1969 in ‘Some Philosophical Problems from the Standpoint of Artificial Intelligence’. The problem – at least from the computer’s point of view – is that as humans engage in dialogues, they make a great many automatic voluntary constraints in their beliefs: thus a proposition that is true at one stage of a dialogue may be ‘framed’ to be false at another, depending on context. Seems kinda obvious. This is the underlying reason why AI’s biggest coup is Deep Blue’s defeat of a chess grandmaster. The rules of chess, compared with the loose structure of language and discourse, are minuscule, and, while I’m not saying chess is easy, it relies on permutation and assessment of possibility.]. However, the Turing Test threw down the gauntlet that sparked the rise of the discipline of AI, which, until the last twenty years or so, was considered to be the holy grail of computer development.
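The footnote’s ‘startlingly simple’ device – a read/write head, a tape and a look-up table – is simple enough to sketch in a few lines. This is my own toy illustration (the table and tape contents are invented), driving a one-state machine whose entire behaviour is to invert every mark it reads:

```python
def run_turing_machine(tape, table, state="start"):
    """A toy Turing machine: a read/write head on a tape, driven entirely
    by a look-up table mapping (state, symbol) -> (write, move, next state)."""
    cells = dict(enumerate(tape))   # a sparse dict stands in for the infinite tape
    head = 0
    while state != "halt":
        symbol = cells.get(head, "_")              # '_' is a blank cell
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells))

# One state, three rules: flip each mark, step right, halt on a blank.
invert = {
    ("start", "1"): ("0", +1, "start"),
    ("start", "0"): ("1", +1, "start"),
    ("start", "_"): ("_",  0, "halt"),
}
print(run_turing_machine("1011_", invert))  # → 0100_
```

Nothing here is clever, which is exactly the footnote’s point: with a rich enough table, this same mechanism can imitate any other machine – the machinic chameleon in miniature.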

It was to be quite some time before computers became nodes of a Network. While we take it for granted that ‘computer plus modem equals internet’, the development and use of a computer network was the domain of a privileged few – those who had access to, and an understanding of, the vagaries of the digital computer. What is now the Internet is the progeny (there are more on the way) of an academic network started between universities and later the defense forces[ This is contrary to popular belief; the Internet was not built specifically to decentralise communications protocols because of the possibility of nuclear war, though it did serve very well…]. A few companies, like IBM, had their own interstate networks, but it wasn’t until the PC became a relatively cheap hobbyist device that the network became ‘local’ – and it was about this time that speculation about the role of a network as an information device (not a War Machine) took off. The idea of a communications network was nothing new – the telephone companies had been making networks for years – it was the coupling of these telephone networks with information devices like the ‘router’ and the personal computer that was radical: providing I could foot the bill, I could dial up a ‘friendly’ computer anywhere in the world and share information. And then, with the Internet, a ‘network of networks’, I could dial a local number and share information with any number of other users in real time. It’s no wonder that the Internet in its early days – late eighties, early nineties – was a byword for democratic and altruistic ideals. Trying to contemplate the size, complexity and spread of the Internet is now impossible, and it’s now much easier (and more important) to see where it isn’t[ It isn’t in many ‘developing’ countries. In his book ‘Being Digital’ the otherwise interesting Nicholas Negroponte says that the real issue of have and have-not is one of age and not economics. This is pure self-deluding nonsense. Technofear/stress is as much an issue in children as it is in the aged, and there are many aged who are very info-progressive.].

The key notion of the Network is that it moves away from the idea of a centre, whether that centre be the government, the CEO, the mainframe computer, or the charismatic leader, to name just a few undermined ‘centres’. To relate this to performance, the development of the Stanislavskian ideal of acting needs the idea of a centre to function. A good actor is often said to be ‘centred’, and derives presence and truth from an honest connection with their own, and their character’s, psychological and physiological make-up. The building up of a part – from deriving the super-objective which lies at the heart of a person, right through to the actions that make up their progress through the world of the play – involves a careful layering of evidence from the play, an idea of the performance located in the director’s vision, and a deep knowledge of human nature all round. This technology of the self is then supported and extended by a careful use of stage machinery, to create a unified whole. Again, in dance, we can see that the interest at the beginning of the century has to do with finding a flexible centre, and allowing feeling (for want of a better word) to flow into movement from it.

Of course these kinds of performance are still with us, even more so on the screen, but they have been seriously questioned, to the point where we can safely say that such performances are an example of, and not the only kind of, contemporary practice. The recent development of Performance Studies takes away the privilege of The Theatre as the site par excellence of performance communication. At least in the Australasian context, it’s possible to see the idea of a modern centre and a postmodern displacement manifested in the structure of the production practices of both kinds of groups. The theatre companies, while no longer ‘companies’ in the sense that we might have meant thirty years ago, still reside in identifiable locations, structure themselves around an artistic director, and rely to a degree on year-round centralised funding. Alternative groups are often structured around the performers, occupy spaces that are temporary, and go for one-off grants. Contrast the Melbourne Theatre Company with NYID or Ranters, or, in New Zealand, Downstage (even in its reduced mode) with Trouble. To a degree this is a simplification, but you can see that the idea of a Network has a life here, and is not restricted to advanced information technologies – or a facile attitude towards an opening night party.

Theatre information – how far does it go?

We are not used to thinking of the Theatre as IT, or as an information medium at all. This is largely because the performance-making process pivots on the broader notion of communication, of which information is only a part. However, it is possible to separate out the aspect of information in Theatre to some degree. Taking something we are very familiar with, the realist three-act play, the idea of information is present in several forms: the script is pure information in that it is what the inferences of the production are derived from, and that which the performances are initially based on. In the production of the play, part of the actors’ and director’s task is to clearly flag important pieces of information that the audience need to make their own inferences from. It’s not uncommon to vocally hype, change a rhythm in music, or mark some clear shift to give a lift to certain lines. Certainly in every play there are lines that will need to be repeated when someone coughs over them, and others which are less vital from an informational point of view; the ‘Blow, winds’ storm speech in Lear is much more about tone and emotive force than information, but it rests on top of an information structure without which the speech would be less powerful. Of course what we’re really talking about here is structure and plot points and all that stuff, but you can see that we can shift focus quite easily to look at these things in a slightly different way. And I’m going to proceed with something of this viewpoint intact; while theatre and performance are not just another information device (given that they are so concerned with emotion, affect, stories and characters), if we artificially restrict our conception of theatre to an IT we can go quite some way towards our grander goal of seeing something of the theatre in formation.

Looking at the commercial theatre is a good place to start to see how far IT has made its way into performance. Leaving aside the aspects of organising and publicising a ‘show’, IT’s only real move into the commercial theatre is in the area of control technologies, most particularly the lighting board. The computerised lighting board is IT in that it provides both the means to control the lights and the interface for the designer/operator to infer knowledge from. It also has that level of abstraction that is a sub-characteristic of IT: a removal from understanding quite how things are accomplished at the material level (there is no way to ‘fix’ an electronic circuit – you throw it away and start again), which, however, frees the user to make the knowledge inference. With the lighting board, the knowledge inference is the process of allowing the user to automate aspects of lighting changes (control), then allowing the whole show to be run on auto from exterior events (network), or letting the operator control many fades and transitions with just one or two sliders (communication). The inclusion of telematic devices like live projection and television, and the development of a generally image-based theatre, is a part of the whole thing – certainly they are crucial in the development of the idea of ‘Network’ in a performance setting – but, keeping to my narrow focus, a little beyond my scope here unfortunately.

Aside from this, little has changed in principle in the mainstream theatre since the Restoration: electro-magnetic means of staging have replaced the purely mechanical in some instances, and electricity for light and sound is a large step, but only one that exaggerates effects that were already available before its introduction. The theatrical use of lasers is probably as close a thing as we have to a real shift, but their use is occasional and more suited to opera or dance. We really need to step aside from the commercial theatre in order to see something new going on with regard to IT. And, despite a few efforts otherwise, the main thrust of IT’s insertion into the theatrical is in research programs, both of the theatrical and the IT kind, not in active theatre groups – though there are notable exceptions like Troika Ranch and Robert Wilson[ The big names here are Stanford, MIT, Carnegie Mellon, The University of Birmingham and De Paul, all universities with either enlightened computer science departments or an active role in the development of Artificial Intelligence.]. It’s ironic that the leading developments in theatre information are happening quite some distance from an active performance culture – in areas, however, where the component parts of theatre are dealt with in depth, but by no means holistically.

Part One: It Lives?

Computer Theatre

We’re going to start to look at what these research projects are examining. Some of their projects will seem bizarre or stupid, but often the outer form is merely a ‘proof of concept’ (or a node for funding). However, the main thing we’ll see is this: whether the motive is commercial, artistic or technological, the two fields of theatre and IT are not mutually exclusive, and much effort is going into understanding how they might interact with each other.

Our jump-off point here is recent research at MIT. A doctoral student, Claudio Pinhanez, has written a short essay titled simply Computer Theatre (1998) as an introduction to his more technical work. For Pinhanez the term ‘computer theatre’ is not an oxymoron, especially as it is conceived as being analogous to ‘computer music’. Pinhanez states that there are four applications of the term – four general ways in which computers can function within a performance. The first is the ‘computerised stage’, or the use of computerised control devices, which we are already familiar with from above.

The second is that of the ‘electronic puppet’, where an operator controls a graphic representation of a character on stage. To a degree, this is already common technology in Rave culture, where live graphics are added to the mix by an operator/artist (the VJ) in much the same way as the DJ uses a mixture of pre-recorded and live effects to alter and enhance the mood of the participants. At the other, more personal end of the scale, computer puppets have a solid base in Net culture; the text-only MUDs, MOOs and Chat practices now have ‘Avatars’, graphic representations selected or created by the computer user that interact using animation as well as text. Such systems, given the low bandwidth of the Net, are necessarily reductionist, and possibly redundant (in the linguistic sense), with the main effect still being carried by the language and dialogue of the users[ There have been numerous studies of MOO culture, but perhaps one of the most interesting documents is Allucquère Rosanne Stone’s [ ], in which she follows the progress of the ‘rape in cyberspace’ incident of a few years back. The application of performance studies to low-bandwidth events (such as phone sex) tends to focus us back on the evocative power of language; it is arguable that a MOO without graphic representation is many times more powerful than one with.]. Using such a system in live performance has its problems, especially when you consider the rhythmic accuracy and subtlety required.

Thirdly, there is the ‘hyper-actor’, a prosthetically enhanced performer who can expand their own movement and sound through real-time control of an information system. While this may seem initially alien, we have to examine our own practices as they stand – isn’t the training of a performer actually about such a process already? We can also see this idea of enhancement in the increasingly sophisticated application of prosthetics, taken to its extreme in the works of Stelarc and given a populist spin in films like Terminator and Crash. The training of the voice and body for maximum expressive potential as undertaken in nominally ‘old fashioned’ drama schools is already a kind of hyping; the process is merely extended across an ontological boundary to include what Donna Haraway would call ‘human and non-human factors’ (Haraway, 1997).

The performer may either use small piezo-electric devices, similar to the receivers that sit under the keyboard of a synthesiser, to trigger events through a MIDI connection to a computer or sampler, or they may enter the field of a video camera connected to a computer which triggers events according to movement size, speed and displacement (the field is known as ‘machine vision’). Again, systems like these have been developed for use in Rave culture, allowing the participants a degree of interaction, from a mere relay of the visual into an enhanced video mix through, more critically, to the ability to affect the music’s tempo or intensity[ I have had the opportunity to play in such a system, and it’s lots of fun. At the experimenta exhibition within [ ] ‘98, a system was set up by [ ]. You could advance through several screens and affect the tempo and intensity of the music. The best thing about it was watching IT geeks trying it out. Some got it completely – you had to use your body; others failed this test dismally and backed off, muttering things about ‘toys’.]. Experimentation with this sort of computer enhancement is much more common in the dance and performance culture than in theatre, and it’s fairly easy to see why: the script model of company theatre means that the rhythm and flow of the work are embedded in the dialogue and character, two things that do not mix well with multimedia aspects thrown on top. The way our culture is televisually ingrained, the auditory loses out to the visual every time, and the nuances of dialogue are impaired (in my old life as a script-based actor I expanded the old adage to this: ‘Never act with children, dogs or multimedia’). However, when your prime means of creation is choreography of the movement-image, expansions and interactions of visual and auditory material, as in film or dance, are just another part of the palette.
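In its simplest form, the machine-vision triggering described above reduces to frame differencing: compare successive camera frames and fire a cue when enough of the image has changed. The sketch below is my own toy illustration, not any particular system from the period; the thresholds, frame sizes and pixel values are all invented, and a real installation would also track where and how fast the change happens:

```python
def motion_trigger(prev_frame, frame, pixel_threshold=30, area_threshold=0.2):
    """Fire an event when enough pixels change between two camera frames.
    Frames are 2-D lists of grayscale values (0-255); a True result would,
    in a real rig, send a MIDI cue to a sampler or lighting board."""
    changed = sum(
        1
        for prev_row, row in zip(prev_frame, frame)
        for p, q in zip(prev_row, row)
        if abs(p - q) > pixel_threshold
    )
    total = len(frame) * len(frame[0])
    return changed / total >= area_threshold

still  = [[10, 10], [10, 10]]       # empty stage, nothing moving
dancer = [[10, 200], [180, 10]]     # half the (tiny) image has changed
print(motion_trigger(still, dancer))  # → True
```

Crude as it is, this is enough to let a performer’s ‘movement size’ drive an information system – which is exactly why the geeks in the footnote had to use their bodies to make the system respond.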

Finally, in Pinhanez’s schema there is the altogether more complicated idea of a computer actor, “where the computer automatically controls a character, establishing a true interplay between man and machine.” As Pinhanez admits, this short sentence contains very lofty goals, but for us it provides a number of questions that bear examining: How does a computer ‘automatically’ control a character? What is a ‘true interplay between a man and a machine’? Are women excluded? It is perhaps critical, as we will later see, that Pinhanez uses the masculine form to describe the computer actor, but in the meantime this is a fascinating vision. If it were really to happen – if we watched a real-time, believable interaction between a computer-controlled character and an actor in a performance environment – wouldn’t we be impressed? Curious? Maybe horrified?

Pinhanez notes that such an undertaking would have been, at a practical level, almost impossible a decade ago, but with recent advances in machine vision and processor speeds, the whole idea is worth pursuing. He has made several performances as part of his research, part of a small, but rapidly growing, number of research and creative experiments in this field where Theatre is the primary mode of exploration.

It is this last possibility, the computer actor, that we’re going to look at in some depth. In its conception we find many of the promises and bogies, not only of IT, but also of a networked culture in general. We are going to examine what it would take for this computer actor to function, where the various areas of research are now in relation to these requirements, and see what people are coming up with currently on the way towards this. Along the way we are going to get quite discursive, examining big ideas like Artificial Intelligence, Knowledge Representation, Interactive Storytelling, and Virtual Reality, but we’ll keep referring back to this construct of our computer actor interacting believably in (let’s keep it interesting) a themed improvisation with a live human counterpart. We’ll also see how various researchers use and conceptualise the discourse of theatre and drama. From there we’ll move on to question the desirability of such a construct.

First, to give our computer actor some kind of research standing, we're going to give it an arbitrary acronym, and I think we'll go for something simple like ACTOR. For the sake of completeness – though we should forget it immediately – here is what it stands for: Artificially Cognisant Trans Operational Resource[ Thanks to the excellent book Technobabble (Barry ) for a guide to putting together meaningless strings of jargon!]. Secondly, we're going to have to establish some general characteristics that ACTOR will need in order to give a decent account of itself:

  1. Some sort of physical presence or character (embodiment).
  2. An active sharing of the stage and story with the live performer, in the form of a sense of stage-craft and an active interaction (interface).
  3. The ability to deal with a basic story or dramatic sequence by recognising or creating objectives, conflicts and resolutions (intelligence).
  4. And finally, a sense of life and enjoyment, that which goes under the performance term complicite (spirit).

I think you'll agree that this is quite a lot to expect from a machine; sometimes it's quite a lot to expect from a performer as well, especially those 'ineffables' like complicite that we spend so much time looking for in the theatre, and which amount to that thing we call 'talent'. However, each of these needs of our ACTOR has some kind of related research happening, if only in its infant stages or in an indirect way. Let's have a look at these areas one by one.

Embodiment and Interface

Creating a real physical presence for our ACTOR is the easiest part. Cinema regularly uses advanced animatronics to create engaging and convincing enough characters. From Jabba the Hutt to Babe, it's a well-researched and profitable field that arises from the application of computer-controlled servo-electric motors to conventional puppets, giving subtle and, more importantly, exactly repeatable action. All we need to do here is decide on a look and vocal timbre and then throw enormous amounts of money at it. The trick is to avoid human life-forms as a goal, since we are so used to the super-fine facial and vocal inflections of human interaction that it's easy to spot a fake; a faithful 'replicant' from Bladerunner is a very, very long way off. Here we can usefully substitute a cartoon-like character, or one that is a robot (as a character, not a form).

However, the use of robotics is not a given. Pinhanez's own experiment in Computer Theatre uses VR (Virtual Reality) to situate the audience's perception of a character in space. While VR has been used on a large scale in theatre by Robert Wilson in [that bloody play in London I can't find the name of…], the visual effects on offer were simple 3D objects and animations, nothing like a convincing character. While complex real-time 3D rendering is beginning to make it to the desktop, it is still at the 'willing suspension of disbelief' threshold, not approaching the threshold of reality perception (only when there are so many colours, at such-and-such an intensity, moving with some degree of fluidity, do you reach the perceptual threshold where the sense stimuli are equivalent to those involved in perceiving real reality, and full immersion can take place). Again, you're better off using VR to animate a dog than a human, but doing it in VR will cost you a lot less than making a robot, and changes become shifts in information, not physical tweaking. There's also the possibility with VR of morphing characters, or of very extreme movements that a robot would be unable to make.

The current state of the art in VR is the CAVE[ A footnote on acronyms: CAVE is an SRA (self-referencing acronym) in that it stands for 'Cave Automatic Virtual Environment'. Such trickiness is common in the computer world; for instance the 'GNU' public licence (a standard for releasing free software on systems like Linux) stands for 'GNU's Not Unix' and is a TLA (Three-Letter Acronym). I would recommend the 'Hacker's Dictionary', in paperback or downloadable txt file, to anyone interested in the arcane and delightful use of English in the computer world; also 'Technobabble' (John A Barry, MIT Press 1991) for more of an overview.], one of which was installed at 'Ars Electronica 98' in Linz, Austria. The most interesting work from our point of view was 'The Thing Growing' by Josephine Anstey and Dave Pape, a 'virtual Frankenstein experiment… an attempt to create a "thing" with a will and a personality of its own – one which confronts the user on an emotional level' (Anstey and Pape 1998). While this seems to promise something of what we're looking for in our ACTOR, there is also, even within the program notes to which I am referring, an admission that "The Thing" is not all that it could be – or at least that it won't meet our ideal: "The thing is heavy handed and willful – in part because of the exigencies of the story-line and in part to hide its own stupidity… It arbitrarily praises or abuses the user for the same behavior…. in part to hide its ignorance". It seems that "The Thing" will fail our test on several grounds: it is neither flexible enough in story-lining, nor sensitive enough to adjust to our presence in any meaningful way.

Part of the problem with VR is its inability to really achieve a subtle interface – the mechanism by which one interacts with the 'user'. One of the visions of VR is the disappearance of the interface: the user has so little awareness of the control mechanism that they are able to operate it off pure instinct. This is where the 'body suit' comes in. The body suit is, supposedly, a reactive membrane that covers most if not all of the user's body and can measure relative movement and tactile signals (such as are used in lie-detector tests), and also has 'push', in that the suit can impart impulses either from another user or from the controlling information device. In this way, it has been envisioned, two or more participants may be a long way from each other but still able to share an experience. But that's just the beginning: VR was seen somehow to embrace all things (existence, art, psychology) in a single flowing experience:

“VR is shared and objectively present like the physical world, composable like a work of art, and as unlimited and harmless as a dream.” (VPL Research at Texpo ’89, in Rheingold, p.154.)

This is the earliest piece of marketing hype about VR, and writing about the subject ten years later, it's still hard to see that it has had an impact outside of specialised uses, although it desperately wants us to believe that it will solve some of our problems. This was initially an 'image' move, in that VR's first incarnation was in the minds of SF authors and cyberpunks, not a great way to sell a product to the masses. VR was then given its utopian spin; at the crest of the hype (around '92) it really was all on for young and old:

‘I think this is the biggest thing since we landed on the Moon,’ says Jaron Lanier, the dreadlocked CEO of VPL Research. (Who was 9 years old in 1969). I don’t choke on that one. Indeed, I’d take it a bit farther, guessing that Columbus was probably the last person to behold so much useable and unclaimed real estate (or unreal estate) as these cybernauts have discovered.’ (John Perry Barlow, (1990) “Being in Nothingness,” p.37)

Just like the Wright brothers’ airplanes, today’s VR technology is crude compared to what can be envisioned, but it’s good enough for us to get off the ground and into the air for an exciting experience. (Pimentel & Texeira (1993) p.241)

And so on… The hype places VR in a dramatic position in many discourses: as an inevitable step on the ladder of computer evolution (from punch card to screen to mouse to VR), a new trope in the American myth of the frontier, a cyberdelic alternative space loosely analogous to a trip (but without the addiction), a threat to network television, and a way to work from home.

Inevitably – and rather lamely – this has led to speculation about cybersex in VR, which I see as both a nostalgic drive to experience pre-HIV sex and a commodity desire to escape the actuality of this body. Again, the reality of this (as imagined in the film The Lawnmower Man[ Interestingly enough, recent films that take on the idea of VR reverse the whole notion: in both Dark City and The Matrix, VR is seen as the thing to be escaped; its presentation is exactly that of a world filled with Samsara which must be escaped (through love, of course…)]) is remote, and VR only really succeeds in the areas of architectural, medical and scientific research, where the inhabitation of complex structures like atomic particles, spatially complicated buildings or the brain is aided by a degree of self-aware immersion.

Outside the visionary realms of VR hype, the interface between a machine and a person (who is typically defined, like a junkie, as the 'user') is a huge area of funding and research. Kicked off many years ago by the efficiency experts wanting to make the most of the industrial revolution ('Time and Motion Studies'), the area of 'Human Factors' and, more recently, 'Human Computer Interaction' (HCI) deals specifically with how the transactions between a machine and a person take place. A quick glance around the field leaves us, again, with a sense that the reality of an interface that will suit the ACTOR is some time off. While HCI currently deals mostly with how we handle tasks on computers, our ACTOR would have to be cognisant in several more complex ways. The minimum an ACTOR would need is a form of vision, so that it could react to gesture and navigate the stage, and enough hearing to interpret the human actor's words. These two fields, called Machine Vision and Speech Recognition respectively, are breakouts from the field of Artificial Intelligence.

Even taking the simpler of the two, aural comprehension, the difficulties are horrendous. The goal is 'Natural Language Processing', the state where the machine can function from input given in natural speech, such as saying 'save this file in my home directory' instead of having to fuss around with mouse and dialog box. Of course the difficulties are fairly obvious: the computer also has to recognise that the phrases 'make sure this document is safe and somewhere I can find it' or even just 'save this f___er' are functionally equivalent to the first. As has been pointed out elsewhere (Barger, 1999), the paucity of the field is represented by the grammar checker in Microsoft Word; it's quite capable of rendering a document illegible if you let it do everything that it wants. The spell checker is better, but its task is that much easier because it's just a simple database application, cross-matching alphabetically and watching for common mistakes rather than trying to make an informed decision about the meaning of a sentence. The complexity of the English language is mind-boggling in itself, and the possible combinations of word, grammar and semantics are simply too vast to be traversed by any modern computer. Speech recognition on the computer is certainly getting better, but the complexity level is just over that of a spell checker: all that's happening is that the computer is checking off a highly modified database, trained extensively by the user, and making a few logical extrapolations about what words may follow what. Again, it doesn't have to make sense of them, just get them up on the screen. However, it is possible that, with lots of training, our ACTOR could build up an extensive database of the human actor's words and even inflections, and make some kind of guess as to what is being said at a very basic level.
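To see just how mechanical the spell checker's task is, here is a Python sketch of that 'simple database application': the classic edit-distance measure run against a small word list. The word list is invented for illustration; a real checker simply does this faster, over a bigger list, with a table of common mistakes bolted on.

```python
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance:
    # the number of single-character edits turning a into b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def suggest(word, dictionary):
    # Cross-match against the word list and return the closest entry.
    return min(dictionary, key=lambda w: edit_distance(word, w))

words = ["save", "file", "document", "directory"]
```

Notice that `suggest("documnet", words)` picks 'document' without the faintest idea what a document is; it is pure symbol-shuffling, no meaning anywhere.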
There is a way to cheat, of course, and that’s to make sure the ACTOR recognises just one or two hundred words, and then the human performer uses these words to trigger responses or commence simple logical chains. But then that seems rather pitiful.
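The cheat looks something like this in Python; the vocabulary and stage directions are obviously made up, but the mechanism is the whole story – a lookup, nothing more:

```python
# A tiny trigger vocabulary: each keyword the ACTOR 'hears' fires a
# canned stage response. This is recognition without understanding.
RESPONSES = {
    "hello": "ACTOR turns to face the speaker and bows",
    "storm": "ACTOR cowers; thunder sample plays",
    "exit":  "ACTOR shuffles into the wings",
}

def react(utterance):
    """Return the first canned response triggered by the utterance."""
    for word in utterance.lower().split():
        if word in RESPONSES:
            return RESPONSES[word]
    return None  # nothing recognised; ACTOR holds its pose
```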

Machine vision is much harder again. As well as the ability to 'make sense' of gesture, the machine needs to be able to make informed decisions: separating the background from the foreground, deciding what is an important movement and what is incidental, deciding whether the two components of gesture (spatial and temporal displacement) should be given equal weight. We looked briefly at a system that moved towards this above, the motion-sensing Rave Maker. The level of its inference was way below this; it uses presence and motion over hot spots (cleverly placed outside the normal weighted range of a stationary body) to make its decisions; nothing like the full field of view is scanned or taken into account. I'm not running that machine down; unlike a lot of research in this area, it's lots of fun and doesn't kill people. De Landa (1994) describes PROWLER, a military use of the same technology. PROWLER is a small autonomous armed vehicle meant to patrol fixed perimeter fences or a battlefield. It uses machine vision to navigate and to distinguish between friend and foe. De Landa speculates that the machine probably passes information across to a human observer, who then makes the decision to open fire, but I would say that this is only because the technology isn't yet good enough: the goal of such a project can only be to develop a fully autonomous 'killer robot'. You can imagine the 'advantages' of using such machines to patrol prisons and military bases, for instance. It almost goes without saying that killing people is much easier than communicating with them on stage!
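The hot-spot mechanism really is as shallow as described. Here is a rough Python sketch – frame sizes, threshold and the very idea of a 4x4 'camera' are arbitrary inventions – showing that all that happens is summed pixel change inside one fixed region:

```python
def hotspot_triggered(prev_frame, frame, hotspot, threshold=30):
    """Frame-differencing over a fixed 'hot spot', as in the rave system.

    Frames are 2D lists of brightness values (0-255); hotspot is
    (row_start, row_end, col_start, col_end). No gesture is 'understood':
    the summed pixel change inside one region is compared to a threshold.
    """
    r0, r1, c0, c1 = hotspot
    change = sum(abs(frame[r][c] - prev_frame[r][c])
                 for r in range(r0, r1) for c in range(c0, c1))
    return change > threshold
```

A hand waved over the hot spot trips it; the same hand waved anywhere else in the frame is invisible, which is precisely why nothing like the full field of view is taken into account.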

HCI is a much bigger field than this, though; it encompasses the study of input devices as well as the design of screen space and of the paths we take through the tasks we perform with computers. There are common concepts used in HCI that should arouse our interest in our goal of making our ACTOR: action, agency and story. The primary source of the introduction of these concepts into public debate is 'Computers As Theatre' ( ) by Brenda Laurel. In this book Laurel draws extensively on Aristotle's Poetics, using notions of Action, Agent, Story and so forth to push for a better understanding of the 'narrative' of using your computer. She maintains that there should be a shift from applications as collections of commands towards applications as managers of tasks. She also suggests that tasks need a classic narrative shape: that the opening, using and closing of a spreadsheet would be more satisfying if the whole thing were framed as a narrative. She doesn't mean to get the computer to tell a story, just that the whole should follow the traditional three-act structure: possibility, production (complication), conclusion. And here's a neat bridge to our next topic – how do you make a computer deal with concepts like 'story', 'action' and so on?
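Taken literally, Laurel's suggestion is just a shape imposed on a task. Here is a hypothetical toy in Python – nothing Laurel herself specifies, only an illustration of the three-act framing applied to some mundane piece of software work:

```python
class NarrativeTask:
    """Toy sketch of a task framed in three acts:
    possibility, production (complication), conclusion."""

    def __init__(self, name):
        self.name = name
        self.acts = []

    def possibility(self):
        # Act I: the opening, what could be done
        self.acts.append("opened " + self.name)

    def production(self, work):
        # Act II: the complication, the doing
        self.acts.append("working: " + work)

    def conclusion(self):
        # Act III: resolution and closure; return the whole 'story'
        self.acts.append("closed " + self.name)
        return self.acts
```

Whether wrapping a spreadsheet session in this shape makes it more satisfying is exactly the claim at issue; the code only shows how thin the structural requirement is.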

Artificial Intelligence

Artificial Intelligence (AI) is a fascinating, formidable and confronting field of study that has been one of the golden promises of the computer. Recall from above that Turing thought his Turing Test would be passed by a computer by the end of the fifties. The fact that the field is now in disarray, or that its earlier, ambitious goals have been successively broken down and scaled back, is interesting in itself. Even the Turing Test itself has been scaled down; the current test that machines vie to pass is called the Loebner Prize. {expand a bit!} We will spend some time tracking this devolution from the grand project to smaller problems, because it is in this 'fall' that we see, paradoxically, ancient concepts like story and metaphor coming to the fore as ways of understanding and manipulating knowledge.

It's important to get this straight right from the start: AI is really about an attitude towards knowledge and, specifically, how we can represent it. Only when we have a 'knowledge representation' (KR) as some kind of object can we manipulate it. To keep things from becoming too technical (for myself as well as for you!) I will largely be dealing in these Knowledge Representations rather than the hardware and software problems of making them work, though we will touch on those if need be. Instead of asking what software on what platform our ACTOR needs, we will be asking what kinds of Knowledge Representations are available for our ACTOR to draw on.

i. ‘Classical AI’

It has been suggested that Knowledge Representation as a self-aware activity starts with astrology, and there's a good case for this; viewed as an information system, astrology is a complex information environment, and the art of astrology is interpreting that environment. The I Ching can be viewed similarly: technically speaking, there are 64 precise permutations of a six-bit binary system (the Hexagrams), as well as the eight three-bit half-words (the trigrams) they contain, and the system of moving lines that provides extra insight into each Hexagram. The I Ching is also a repository of the dominant philosophical system of the time, and its wisdom is always framed in terms of the 'Great Man' who is a diplomat of virtue. Again, we can look at the Encyclopedia as an attempt at a knowledge system that rests on certain philosophical tenets of the Enlightenment, incidentally also a great example of a 'clockwork' system. Other KR systems come to mind – Leibniz's Universal Calculus, Roget's Thesaurus, Boole's Symbolic Logic – each of which has a systemic urge predicated on a belief structure.
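The six-bit reading of the I Ching is easy to make literal in code. A quick Python sketch (reading a solid line as 1 and a broken line as 0, with the convention of listing the lower trigram after the upper left entirely to me):

```python
# Each hexagram is a 6-bit word; each is built from two 3-bit trigrams.
hexagrams = [format(n, "06b") for n in range(64)]  # '000000' .. '111111'
trigrams = [format(n, "03b") for n in range(8)]    # '000' .. '111'

def split_trigrams(hexagram):
    """Return the (lower, upper) trigrams of a 6-bit hexagram string."""
    return hexagram[3:], hexagram[:3]
```

Two to the sixth is exactly the 64 Hexagrams, and two to the third the eight trigrams: the combinatorics of the system drop straight out of its binary structure.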

Looking at early AI in a similar way will help us get a handle on it. 'Classical AI' refers to the time before the seventies, when the whole field of AI was being established with the help of some massive grants from DARPA[ DARPA] and the setting up of various labs, most notably the AI Lab at MIT under the guidance of Marvin Minsky. Classical AI concerned itself with a heavy KR burden: its goal was no less than building a synthetic mind that could deal with universal knowledge, general theories – even the theories of general theories – and generally just sit around and 'solve problems'. Implicit in this project was the notion that this mind would solve problems by imitating our own mental processes, processes which had been firmly ingrained by the last four hundred years of sequential and mathematical method. At this stage of the game it was thought that symbolic representation was a relatively easy cognitive problem, in that reasoning amounted simply to the retrieval and manipulation of symbols. In retrospect this seems quite naïve, but we have the benefit of having seen the structuralist-modern project fall apart everywhere.

This is not to say that there were no successes in Classical AI, but the successes that occurred were in tightly bounded fields. The first notable success occurred in Minsky's MIT lab, with doctoral researcher Daniel Bobrow in 1964. Bobrow created a program, STUDENT, that could take a sentence, turn it into an equation, and solve it. For example, given the input "Mary is twice as old as Ann was when Mary was as old as Ann is now. If Mary is 24 years old, how old is Ann?", STUDENT could figure out the answer – 18 – which is certainly better than I can do without boring myself silly over it. Given the complexities of Natural Language Processing as outlined above, the ability to infer equations from sentences is no mean achievement. In an article on STUDENT for 'Scientific American' in 1969, Minsky states that, beyond this ability, the real intelligence of STUDENT lay in asking pertinent questions – either of the human operator, or of a set of internal files containing such information as 'People is the plural of person' – that could resolve what certain semantic structures 'meant'. It is from the same article that the quote at the top of this section comes.
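STUDENT's cleverness was in building the equation from the sentence; checking the answer is trivial by comparison. Here is a brute-force sketch in Python (emphatically not STUDENT's method, which was symbolic) that just tries every age until the riddle's condition holds:

```python
def solve_age_riddle(mary_now=24):
    """Brute-force the STUDENT example: Mary is twice as old as Ann
    was when Mary was as old as Ann is now. Returns Ann's age."""
    for ann_now in range(1, mary_now):
        years_ago = mary_now - ann_now      # when Mary was Ann's current age
        ann_then = ann_now - years_ago      # how old Ann was back then
        if mary_now == 2 * ann_then:
            return ann_now
```

The gulf between this and STUDENT is exactly the gulf the essay is tracing: the loop knows the condition only because a human read the sentence and wrote it down.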

STUDENT is a case of the general class of 'expert systems'. In Expert Systems, computers could apply localised knowledge to machinic processes (robotic car factories, say) and control them over significant actions or periods of time. The system itself was made in close collaboration with human experts, whose knowledge was 'captured' into a series of rules. Then, using sensors and guidance mechanisms, the Expert System could make changes to the functioning of a plant in real time. Early applications of Expert Systems were so successful that, even as late as 1984, vast amounts of cash were being spent on Universal Expert Systems: computers that could handle any 'expert' problem at all, without artificial boundaries being placed around what the task might be. The biggest such research project still in operation is CYC:

“Doug Lenat’s CYC (from enCYClopedia) Project at MCC in Austin, a ten-year $35 million project begun in 1984, uses logic, too, but emphasizes the Aristotle-problem, employing a dozen ‘ontological engineers’ to enumerate tens of millions of common-sense facts that will ultimately ‘add up’ to logical intelligence. Lenat’s goal is a system that can understand and speak ordinary language, and detect violations of common sense as readily as humans can. As of 1994, CYC’s sponsors were: Apple, Bellcore, DEC, the DOD, Interval, Kodak and Microsoft.” (Barger, 1998)

Obviously this is quite a big undertaking: Lenat estimates that the number of rules required for CYC to function is between twenty and forty million, and he has extended the time frame for completion from 1994 to 2004, so we'll soon see how far that intense amount of rule-making gets them!
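The shape of the rule-based approach that Expert Systems and CYC share can be sketched in a few lines. The factory rules below are invented, but the engine – fire every rule whose premises hold, until nothing new can be concluded – is the standard 'forward chaining' at the heart of the genre:

```python
def forward_chain(facts, rules):
    """Minimal expert-system sketch: rules are (premises, conclusion)
    pairs; keep firing rules until no new fact can be added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Invented plant-control rules, the kind 'captured' from a human expert:
rules = [
    (["temperature_high", "line_moving"], "reduce_speed"),
    (["reduce_speed"], "notify_operator"),
]
```

CYC's wager is simply that tens of millions of such rules will 'add up' to common sense; the Frame Problem, discussed next, is the reason to doubt it.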

In the early days such a massive effort of rule accumulation would have been thought ridiculous; surely there were some universal rules that could solve all cases? To me this is an indication that early AI was actually operating with a 'motor' paradigm. Surely there is an engine (an algorithm) that could extract energy (a solution) from a reservoir (a set of problems and data)? It turns out that, without trying to write down every known local rule in the world (which is CYC's project), the motor paradigm just can't do it. The discovery of this seemingly obvious fact signalled the beginning of the end for Classical AI, and is known as the 'Frame Problem'.

ii. From Classical to Behavioral AI

The Frame Problem puts a nasty dent in the clean ideas of AI practitioners. Its official debut was in 1969, in a paper called 'Some Philosophical Problems from the Standpoint of Artificial Intelligence' by McCarthy and Hayes, and the basic idea – again centered around the concept of knowledge representation – is this:

“A common sense appreciation of the world includes a great many automatic constraints on Belief. For example, if a proposition p is true at one time, and an event e occurs, we just know whether e is relevant or not to whether p is true at a later time. We ignore beliefs that are obviously irrelevant to our goals and we keep track of salient changes… The frame problem is that of programming ‘frame axioms’ to reproduce this ability…” (Italics mine. Oxford Dictionary of Philosophy, 1996)

This 'just knowing' whether to update our beliefs and knowledge representations on the fly turns out to be hideously complex. In fact, over the thirty years since this statement of the Frame Problem, a generalised understanding has emerged that 'Classical AI' is currently beyond us, and that it's the simple problems, such as knowing when to shift a 'frame axiom', that are much harder than the apparently more complex problems like how to run a car factory without human control. Minsky puts this nicely: "Much adult thinking is basically much simpler than what happens in a child's ordinary play! It can be harder to be a novice than an expert! This is because, sometimes, what an expert needs to know can be quite simple – only it may be very hard to discover or learn in the first place". (Minsky 1983)

In exactly the same year as the Frame Problem starts to put the brakes on the AI movement as a discipline, the public's imagination is fired by the demon-archetype of AI, the computer HAL in Kubrick's '2001'. It's interesting that HAL has something of a childish attitude towards the humans in his care; through the film we start to see that his calming adult demeanor hides something more willful and emotional, and we sense that HAL's directive to ensure the success of the mission takes over from the health of his adult charges, much as children get fanatically ruthless in their play.

Following the general scaling back of most of Classical AI's goals, the field started to find a new, more open and less scientifically 'rigorous' form: Behavioral AI. Where the goals of Classical AI were all-inclusive, scientific, generalist and 'deep', the realisation that they might go unrealised for quite some time led researchers to break off smaller and smaller areas of the AI program to chew on. As an example of quite how small some of these areas have become, the 1997 American Association for Artificial Intelligence Robot Competition and Exhibition held competitions for research groups (and very expensive ones at that) in the following areas: 'Find the Remote', 'Vacuum' and 'Hors d'oeuvres'. Despite the seemingly facile names, these problems are considered the practical cutting edge of robotic implementations of AI. The 'Vacuum' entrant, for instance, not only had to do an initial clean of a house without scratching the furniture, but also had to keep an eye on people entering and leaving rooms so it could check whether the task needed repeating. You'll probably agree that, while these things are important, it all seems a long, long way from HAL. However, in such a competition we can see the characteristics of Behavioral AI. The analogy has shifted from the human mind to simpler, more achievable life forms like the insect. The success of the projects is judged not by objective measurement and the ability to solve generalised problems, but by competence at a specific task in a specific environment. Here cognition is considered in its 'shallow' sense: what do I need to do in order to get this job done? This, of course, is also a way of working within a single frame, thereby making the Frame Problem itself inapplicable.
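The 'shallow' cognition of Behavioral AI can be sketched as nothing more than a prioritised reaction loop. Here is an invented toy version of what a 'Vacuum' competitor's control logic might look like in Python; no world model, no reasoning, just reactions ordered by priority:

```python
def act(percepts):
    """Behavioral-AI sketch: map the current percepts straight to an
    action, highest-priority reaction first. No memory, no planning."""
    if percepts.get("person_in_room"):
        return "wait"        # people have priority over cleaning
    if percepts.get("obstacle_ahead"):
        return "turn"        # protect the furniture
    if percepts.get("dirt_here"):
        return "vacuum"
    return "wander"          # default exploratory behaviour
```

Competence emerges from running this loop many times a second in a real room, which is exactly the insect analogy: nothing in the code 'knows' what a house is.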

However, such attempts to vaporise the Frame still haven't stopped people considering what it might mean for machines to mimic human intelligence, or what kind of intelligence a machine already has without mimicking the human. It turns out that machines can pretty much do anything that relies on a certain kind of thinking, what Minsky above called 'adult thinking'. Adult thinking is a logical affair that we've all had to deal with at some stage of our lives; within each logical frame we quite happily deal with the rules of grammar, the clear abstractions of algebra, and solutions similar to the ones STUDENT was made to solve. These kinds of thinking can be summed up with the idea of 'orthogonal' thinking: the notion that complex events can be described in quite neat and precise terms by defining logical axes along which various data or phenomena can be ordered. The Cartesian grid of x, y, z is a classic example, but it's easy to expand into other uses by imagining successively more dimensions holding more and more information. Such information can be looked for, or searched, using a straightforward hierarchical logic that sequentially compares data against the positional, or index, variable it has been asked for. Such a system works very well in computers, and indeed in science and IT generally; it lies at the heart of how computers string together complex queries to databases and get around the world wide web and the local file system on your computer. However, staying with Minsky, AI is concerned not just with indexing information, but with how to make it meaningful. Here's his concise take on orthogonal thinking:

“The secret of what something means lies in the way it connects to all the other things we know. The more links, the more a thing will mean to us. The joke comes when someone comes along and tries to look for the “real” meaning of anything. For, if something had just one meaning, that is, if it were only connected with just one thing, then it would scarcely “mean” anything at all!” (Minsky 1992)

Such a quote is particularly interesting in that its drive is very similar to poststructural thought (Deleuze in The Logic of Sense is a great example), yet Minsky manages to avoid the 'do I have a navel or not' gazing of extreme deconstructive practice and moves on to suggest that, for a computer to understand, it would have to deal with these webs of meaning (networks), rather than just with defined logic. The implication is that linear logic is just one of the forms a computer could use to compute things, and that the nature of a meaning web (redundancy, messiness, circularity) is what actually gives us the ability to think at a basic level. This all tends to make sense, and the wonder of it all is why we ever started trying to create AI with the old systems of 'thought about thought'. But that is precisely the benefit of the undertaking: now we know that new thinking about how we think is required, the machinery will follow – as it already has. This is nothing startlingly new; such shifts have happened in just about every field of human knowledge (representation) you can think of. From the rhizome of Deleuze's poststructuralism to physics, the characteristics of the Network – messiness, redundancy, circularity, chaotic behavior, complexity – are becoming the normal way we look at things.
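Minsky's meaning-as-connections can be taken quite literally in a toy sketch. The little word-web below is my own invention, and the 'measure' of meaning is deliberately crude, but it shows the shift from an index to a network: a word 'means' more the more things it connects to.

```python
# A toy associative web: each word lists the words it links to.
links = {
    "stage": ["actor", "light", "story"],
    "actor": ["stage", "story", "mask", "audience"],
    "mask":  ["actor"],
}

def meaningfulness(word):
    """One crude measure of Minsky's claim: direct connections plus
    new connections-of-connections."""
    direct = links.get(word, [])
    second = {n for d in direct for n in links.get(d, [])}
    return len(direct) + len(second - {word} - set(direct))
```

Querying the web is no longer a lookup along an axis; it is a walk through redundant, circular connections, which is exactly the property an orthogonal index throws away.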

iii. Believable Agents.

Believable Agents (or Synthetic Agents, Synthetic Characters) are the ideational development of Behavioral AI. Where Behavioral AI is 'insectoid', Believable Agents are more of the 'animalistic' order. The 'agent' idea is implicit in Aristotle's Poetics, where it is a [].

To start thinking about Agents it might help to describe a few. The simplest are the 'bots' and 'crawlers' of the Internet. These are relatively simple programs that autonomously search the web for specific information (the bot), or just catalogue everything they come across for later consumption (the crawler). Beyond these are more specialised agents that will find you the best price on an item, or give weighted ranks to information in response to queries about, say, how to heal a bee-sting. These latter bots use Fuzzy Logic and feedback to look 'left and right' as they search for information, tuning their searches as they go. A very similar bot is that ghastly 'Office Assistant' in Microsoft's Office '97 suite. The attempt to give these perhaps otherwise useful tools a personality is pitiful. Right now the animated paperclip is telling me this about itself: 'Though nothing more than a thin metal wire, Clippit will help you find what you need and keep it all together'. Great.

However the Microsoft implementation of this technology is by no means where it ends. The use of agent technology can be skillfully and entertainingly done. The leading edge here seems to be in education and training programs. Here, again, is the step back from AI: if I’m learning about, say, accountancy, the last thing I need is an AI that will engage me in meaningful discussion about the origins of the universe because it got bored with tax law. What is more useful in this context is an agent that could sense when I had gained mastery of a topic and move to the next, provide me with a simple means of keeping notes, and keep my attention on the job at hand. To go one step up the scale, wouldn’t it be useful if the agent could sense that I was getting tired, and pop up a game or suggest a coffee break?

One of the big differences is that Believable Agents are ‘future seeking’, and aren’t just reactive in the way that Behavioral AI is. They are interactive in that we don’t sit back and judge a Believable Agent’s competence to do a task, we do the task with it. This should make us sit up and take notice – aren’t these some of the things we wanted in our ACTOR? It should be even more interesting then to realise that the prime way that Believable Agents are programmed is around ideas of story, character and more general ‘artistic’ principles.

Certainly the ‘future seeking’ aspect of Agents is essential in our ACTOR; there must be a way that a goal can be set and then executed. Technically, the way that Agents achieve this sense of looking is in the use of Fuzzy Logic, Pattern Recognition and Case Based Reasoning, all ways of dealing with searching in a ‘loose’ way. Fuzzy Logic relies on weighted set membership (grey can belong both to the set of white things and the set of black things, to a degree) and iterative feedback (rather than that painfully accumulated expert knowledge) to adjust its measures of what it might be looking for, and also what a ‘hit’ might be. Pattern Recognition attempts to look at the whole of a thing and see if it has the same or similar enough ‘shape’ as the search criteria. And Case Based Reasoning looks at earlier, successful, attempts at a task to find others that might be similar to the current one, then adjusts its current methods to mimic those that may have solved the earlier cases. All this cleverness with matching things in regard to a goal is good, but there are two things that the Agent doesn’t give us yet: a beginning and an end. Both the setting of the goal, and the final ‘this is it’, are judged by the user. In effect, the user is leaving the drudgery to an Agent, but the intelligence is still supplied by an external force. This is the case with the PROWLER robot mentioned above, and is not obviously a bad thing, but it does limit the potential of our ACTOR. Indeed the examples of implementations of projects that are moving towards the ACTOR ‘ideal’ are very simple, one of the best being a basic master-slave status swap.
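To make the weighted-set-membership idea concrete, here is a minimal sketch in modern Python. The functions and weights are hypothetical illustrations of the principle, not the code of any real bot: an item belongs to several sets at once, each to a degree, and a search can rank its hits by how strongly they match the criteria it currently cares about.

```python
# Weighted (fuzzy) set membership: grey belongs to both 'black' and 'white',
# each to a degree between 0 and 1, rather than to exactly one crisp set.

def grey_membership(brightness):
    """Map a brightness in [0, 1] to degrees of membership in two sets."""
    return {"black": 1.0 - brightness, "white": brightness}

def fuzzy_rank(hits, criteria_weights):
    """Rank hits by the weighted sum of their fuzzy memberships.

    `hits` maps a name to its membership degrees; `criteria_weights` says
    how much the search currently cares about each criterion (and could be
    adjusted by feedback between queries, as described above)."""
    return sorted(
        hits,
        key=lambda name: sum(
            hits[name].get(c, 0.0) * w for c, w in criteria_weights.items()
        ),
        reverse=True,
    )

# A mid-grey is 'mostly black' and 'somewhat white' at the same time:
m = grey_membership(0.4)
```

The point of the sketch is only that membership is a number, not a yes/no, which is what lets such a bot tune what counts as a ‘hit’ as it goes.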
While, to me, calling these Agents ‘Synthetic Characters’ is overkill, the rabid success of the Tamagotchi and Aristotle’s dictum that character is revealed through the action of the plot point me to another conclusion about Agents: in some cases we are very easily hooked, in that we have no problem whatsoever providing a middle and an end, or even a whole life history and storyline, if the Agent is sympathetic. This is what we’re going to turn our attention to next, and get out of the ‘mind’ trap – after all, doesn’t our ACTOR need to be able to deal with emotion?

v. Affective Computing

The Tamagotchi is an indication of how even trivial implementations of emotional factors in IT devices can make them a sensation. Tamagotchi have emotion and spirit only in the following ways: the spirit comes from the quality of the animation and the imagination of the user, the emotion from the building of reliance on the user through a simple button-driven command set (feed, play, empty the kitty litter). They are extremely simple, genuinely addictive toys that can give us some small hint of the potential role of emotion in IT.

However we need to go quite some way beyond the Tamagotchi to see how far our ACTOR is getting towards feeling or displaying the level of emotion that we expect from performers. Affective Reasoning is trying something quite different: to ‘emotionally reason’. Sound absurd?

Pursuing two leads from above, it’s possible to see that AR is a kind of exploitable byproduct of the failure of AI and the ‘Frame Problem’. If we need a complex brew of emotion, discourse and logic to create a genuine dialogue, then should we be surprised if every so often a combination is hit on by a simple agent which evokes within us a complete response? And by complete here I mean in the sense that it triggers all of us at the same time: the affective, physical and logical constituents of our characters. I think that such a complete response is part of the aim of theatrical effort, part of what Aristotle gives us with the idea of identification and catharsis and so on. Looking further at semiotically derived studies such as Barba’s (1991), I could say that part of their lucidity is precisely this kind of search for maps of affective triggers, and how they differ from culture to culture. Even with the above-mentioned shift to a realist theatre in, at least, the first half of the twentieth century, there are plenty of pulls in other, less psychologically oriented directions. The work of Meyerhold, Brecht and Jarry springs to mind as examples of theatre that doesn’t ‘extend’ from inner truth, but rather ‘intends’ from production to affect.

Such contradictions of method can also exist quite happily. A part of my training as an actor dealt with a systematic exploration of affective modes through the use of Laban methods of characterising human types (initially through movement), and then generating characters from those types. This way of working was never seen as being particularly antagonistic to a psychologically derived and centred approach. Performers will generally try stuff out, decide what works for them, and get on with it. An audience will often have a hard time telling if a performer is working in a psychologically extending or a physical intending way. I believe that this is because the performer changes modes many times in the course of a performance, sometimes working an action, sometimes following a kinesthetic vector, sometimes just ticking off the list of lines to get through while thinking about what someone in the theatre thinks about them (communication spiraling in a feedback loop).

My point is this: there is no implicit reason why an ‘external’, non-psychological model of consciousness or reality will not yield a satisfying affect. This seems obvious, but then, looking back at this from the point of view of an ACTOR, there is no reason that an externally supplied set of responses will not provide an equivalent affect. In our terms of reference here, the Laban system is a rather simple ‘look-up table’. That is, any movement or type of embodiment can be said to be assigned a place in a system, and manipulating that information can then give back an archetype for an affective being. If you can’t believe that this could be effective, then you haven’t played a good computer game, as games rely on (very fast) manipulation of databases and look-up tables, laced with a good dose of fairly simple ‘AI’ and appropriate art. From there it’s a small conceptual step to say that a believable agent for theatre can be created. Although it may not pass the Turing Test, it seems to me that the Turing Test is not an appropriate test of an audience’s ability to buy into a theatrical experience; that is, we are willing to engage in receiving the experience in the full knowledge that the performer is not who they are saying they are. We accept the lies to get at the ‘greater truth’ in theatre, just as we grant fairly basic computing devices like the Tamagotchi a ‘greater affect’ than they have in themselves.
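The ‘look-up table’ claim can be sketched in a few lines of Python. The table below borrows the classic Laban effort qualities (weight, time, space) and their action names, but the mapping to affect archetypes is my own hypothetical illustration, not an authoritative Laban encoding:

```python
# A toy look-up table in the spirit of the Laban example: movement qualities
# index directly into affect archetypes, with no psychological model at all.

LABAN_TABLE = {
    # (weight, time, space) -> effort action / affect archetype
    ("strong", "sudden", "direct"):     "punching / anger",
    ("strong", "sustained", "direct"):  "pressing / determination",
    ("light", "sudden", "indirect"):    "flicking / playfulness",
    ("light", "sustained", "indirect"): "floating / reverie",
}

def archetype_for(weight, time, space):
    """Return an affect archetype for a movement quality, or a neutral default."""
    return LABAN_TABLE.get((weight, time, space), "neutral / undetermined")

# A strong, sudden, direct movement reads externally as anger:
reading = archetype_for("strong", "sudden", "direct")
```

Nothing here ‘feels’ anything, which is exactly the point: an externally supplied table, consulted fast enough and dressed in appropriate art, can still land as affect for an audience.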

Artificial Life – Spirit, Genetics and Conditions of Being.

i. Joie de Vivre

In some ways, our ACTOR is a contemporary ‘Frankenstein’, and while I feel quite happy going through all these ideas, there are people who would consider everything I’ve been describing abhorrent and heretical. The threat of the ACTOR is contained in more thoughts from Minsky, again from the pivotal time just before the Frame Problem knocked this sort of speculation on the head – at least as a serious goal for AI:

‘It is reasonable… to be sceptical about whether machines will ever be intelligent. It is unreasonable, however, to think that machines could become nearly as intelligent as we are and then stop, or to suppose that we will always be able to compete with them in wit or wisdom. Whether or not we could retain some sort of control of the machines, assuming that we wanted to, the nature of our aspirations would be changed utterly by the presence on earth of intellectually superior beings’. (Minsky 1969)

There is no real reason to suppose, all my objections above notwithstanding, that, given a long enough time frame, the ACTOR is merely a convenient fiction for taking us through the ideas related to IT; it is a solid possibility. In the light of the above quote, there is no way to think that an ACTOR would be either necessarily benign and incapable of selfish behavior, nor an evolutionary form reliant on us for its continuing existence. Logically, with all the tools above developed to their nth degree, an ACTOR would be a challenge, but not impossible. What, however, of the last requirement of our ACTOR, that of a joie de vivre, a complicite? What is the point at which we say ‘It Lives’ rather than just ‘It is a virtual-reality embodied artificial agent with goal-seeking behavior, speech and vision input technologies, and the appearance of exhibiting emotive qualities and reason’?

This is a question that is raised in passing in the field of Artificial Life. Initially it might seem picky to try and separate intelligence from life, but the two concepts both have their ‘artificial’ fields, of which we’ve examined Artificial Intelligence. However the two fields started in different ways and took wildly divergent paths, to the point where AL was a joke in AI, and to AL practitioners AI seemed largely redundant. This attitude has changed somewhat over the last decade, with the two fields beginning to overlap to the point where AL is incorporated into AI quite routinely. For what we’re looking for currently, the start of our complicite, we should look at one thing that AI initially lacked and that life has, and again we’re going to come back to Minsky’s 1969 Scientific American article:

‘No program today, however, can work any genuine change to its own structure… In order for a program to improve itself substantially it would have to have at least a rudimentary understanding of its own problem solving process and some ability to recognise an improvement when it found one. There is no inherent reason why this should be impossible for a machine. Given a model of its own workings, it could use its problem solving power to work on the problem of self-improvement.’ (Minsky 1969)

Minsky again puts his finger on the button here: what intelligent systems really need is the ability to track changes in themselves, to develop self-awareness, to the point where they realise that they have made an ‘evolutionary’ leap. Machines need the ability to ‘boot-strap’, to pull themselves up a level of intelligence or understanding with only themselves as the means and measure. Such a requirement immediately puts us in ‘selfish-gene’ territory, pondering the larger questions of evolutionary drive. Evolutionary theory tends to agree with Minsky’s insight – these ‘boot-strapping’ moments are paradigm shifts where the mechanism keeps the result of a ‘fitness choice’ made about a more or less haphazardly arrived at instance of emergent behavior.

ii. The early beginnings of ‘The Game of Life’.

What Minsky wouldn’t have known at the time was that John Conway had made up a board game called ‘The Game of Life’ and had been recruiting lots of friends and colleagues into marathon sessions where the first moves were being made in the field that would become broadly recognisable as ‘Artificial Life’ some twenty years later. The game of ‘Life’ is very simple: a very large grid, some tokens, and a few simple rules. {Each generation, a token with fewer than two neighbouring tokens dies of ‘loneliness’, one with two or three neighbours survives, one with more than three dies of overcrowding, and a new token is ‘bred’ on any empty square with exactly three neighbours.} This simple and elegant framework provided some interesting results: after a period of time, certain patterns were seen to emerge {examples}, and after much exploration it was proved that an infinitely large version of the same game was capable of computation – was, in fact, a Universal Turing Machine as described above.
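The whole rule set fits in a few lines, which is part of its elegance. Here is a minimal sketch in Python (obviously a modern convenience rather than anything Conway’s marathon board-game sessions had), computing one generation over a set of live cells:

```python
# One generation of Conway's Life: survive with 2-3 live neighbours,
# birth on an empty cell with exactly 3, death otherwise.

def neighbours(cell):
    """The eight cells adjacent to a (x, y) grid position."""
    x, y = cell
    return {(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def step(live):
    """Apply the rules once to the set of live cells."""
    # Only live cells and their neighbours can be alive next generation.
    candidates = live | {n for c in live for n in neighbours(c)}
    new = set()
    for cell in candidates:
        count = len(neighbours(cell) & live)
        if count == 3 or (count == 2 and cell in live):
            new.add(cell)
    return new

# One of the simplest emergent patterns, the 'blinker', oscillates
# between a horizontal and a vertical bar of three tokens:
blinker = {(0, 1), (1, 1), (2, 1)}
```

Running `step` twice on the blinker returns it to its starting position – a tiny instance of the pattern-emergence the text describes.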

With the increasing use of computers the game transferred from a hugely laborious task to a much faster and larger version played out in computational space. These further tests confirmed Conway’s results: patterns did emerge, and, taken altogether, a board of infinite size was analogous to a Universal Turing Machine, and so could simulate any computational activity of a computer. The phenomenon was called CA, or Cellular Automata. Why is it important that a game run on a computer could simulate a computer? The Turing Machine ran off a set of instructions that, while minimal, are more complex than those of the game of ‘Life’. So, in effect, Life displayed emergent behavior: from a simpler set of rules, it could imitate a more complex machine. While this doesn’t seem like much of a big deal, the idea of it was tantalising enough that more action followed.

Soon researchers began making more complicated rules and changing the tokens from simple on/off entities to more complex strings of information. The results were startling: given enough time, it was discovered that automatons ‘made’ of information could, in fact, with a few evolutionary tricks thrown in, evolve to a state where they could learn to successfully navigate complex preset ‘paths’ by being trained up on simpler ones. From here things quickly developed to how they stand today: these independent bit-codes can actually solve mathematical problems unaided, and then move onto the next ‘food’. An example of the general acceptance that this field can be really useful is that a degree of the Windows operating system has been ‘bred’ rather than written. The whole field now goes under various rubrics, depending, I think, on the funding structure of the institution that you’re working in and where the idea first came from: while I have started the account of Artificial Life with Conway, there were others who were developing similar ideas at the same time or earlier. The whole field has become known as Evolutionary Algorithms (EA) and refers to any informational method that uses processes analogous to evolution. The various processes which this general rubric refers to include Genetic Algorithms (GAs, as above), Evolutionary Programming (EP), Evolutionary Strategies (ES), Classifier Systems (CS) and Genetic Programming (GP). It is not the stated goal of some of these methods to actually create Artificial Life, but there is always the use of some of the methods of evolution. It’s quite curious to realise that, while these strategies are programmed on and exist inside the computer, it’s Darwin’s ‘Origin of Species’ and not Descartes or even Einstein or Wiener that makes sense of this field of information technology.

The current high end of CA research is in Japan, where the CAM-Brain Project has made a CA-based Artificial Intelligence with 10 million artificial neurons, and is hoping for a billion by 2001. The Brain’s inventor, Hugo de Garis, states without qualification: “One century from now I see humanity being replaced by massively intelligent machines.” Appropriately, given the affective success of the Tamagotchi, the commercial application is in artificial pets, the first one being called Robokoneko, which will carry a radio link to the 80mb artificial brain. A sideline of this project is the production of evolvable hardware to complement the artificial brain (PC@uthority, 1999).

iii. The Information Imperative & Evolutionary Computing.

Strangely then, we end up talking about genetics as a prime motivator for IT. At a general level it kind of makes sense; after all, the ‘post-capitalist’ market that IT thrives on certainly likes to characterise itself in concepts that we might recognise from evolutionary theory, like ‘Survival of the Fittest’ and ‘Niche Markets’ (while managing to avoid evolutionary strategies that have an altruistic basis, like kin selection[ Kin Selection is]). It’s worth recapping our high school biology here to remember what evolution is all about:

“Firstly we should note that EVOLUTION (in nature or anywhere else) is not a purposive or directed process. That is, there is no evidence to support that the goal of Evolution is to produce Mankind (sic). Indeed, the processes of nature seem to boil down to haphazard GENERATION of biologically diverse organisms. Some of evolution is determined by natural SELECTION or different INDIVIDUALS competing for resources in the ENVIRONMENT. Some are better than others. Those that are better are more likely to survive and propagate their genetic material.” (‘The Hitch-Hikers Guide to Evolutionary Computation’, 1999)

We should note further that the methods of generation include both mutation (random effects) and recombination (where the parents’ chromosomes do a swap in the offspring).
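Generation, selection, mutation and recombination are all the machinery a minimal Genetic Algorithm needs. The sketch below is an illustrative toy, not any particular research system: the ‘fitness’ is simply the count of 1-bits in a chromosome, which a population reliably evolves towards under these four operations.

```python
# A minimal genetic algorithm: generation, selection by fitness,
# recombination (single-point crossover), and mutation.
import random

def fitness(chrom):
    """Toy fitness: how many 1-bits the chromosome carries."""
    return sum(chrom)

def crossover(a, b):
    """Recombination: the parents' chromosomes swap tails in the offspring."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(chrom, rate=0.01):
    """Random effects: each bit flips with a small probability."""
    return [bit ^ 1 if random.random() < rate else bit for bit in chrom]

def evolve(pop_size=30, length=16, generations=60):
    # Haphazard generation of a diverse initial population.
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]        # selection: the fitter half survives
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Note that nobody tells the population what a good chromosome looks like; the answer emerges from fitness pressure alone, which is the point of the ‘Let us Evolve!’ attitude discussed later.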

Talking specifically about IT and computing in terms of evolution seems counter to logic, and this is because it is: the type of computing we have when we start to talk about evolutionary systems, as opposed to sequential computation, is analogous to the kind of shift I have talked about from the motor to the Network – which is to say, a shift from strong linear logic to networked and loose types of reasoning.

So we have to make a distinction between types and uses of computers to see how Evolutionary Computing is different from Sequential Computing, and the easiest way to do this is to go back to the development of the first digital computer, ENIAC. It turns out that the makers of ENIAC simply modeled their computer’s processes on the methods that the human computer was using at the time. They literally walked into the computing room where the women sat with their mechanical calculators, took their workflow and algebra, and translated it into machine code. This is fairly understandable; it was not their goal to reinvent the wheel, rather to make it go faster. And that is, in essence, the history of the computer up until about 1970 – and still the goal of companies like Intel. It’s simply this: we have the algorithm, and any increase in speed will come from implementing and optimising that code (a.k.a. programming). That is, we have created the computer as an image of our own intelligence and belief in the (clockwork/motor) paradigms that we have had since the Enlightenment. That is also to say that a computer being used in the way that I’m using it now, off-line, to type with, is more a motor device sped up than a Network. Of course there is the argument that writing by outline and cut-copy-paste writing are their own systems of mental organisation, but neither of these characteristics is exclusively the domain of the computer; the first is Descartes through and through, the second is as old as Dada.

On the other hand, this same computer is remarkably good at Evolutionary programming, even though it is not really built for the task. The provision of another set of operating systems and programs would make my desktop computer at least as powerful as those that serious labs had access to through the eighties[ Or, more simply, just run a ‘Virtual Machine’ inside this one, as with using a Java Applet to view examples of Evolutionary Algorithms on the Net. {}]. However there are machines in existence which really embody the evolutionary paradigm properly, and not just as an after-thought, as with mine. The first such machine was built in the 1980s as a test-bed to run Artificial Life scenarios. The ‘Connection Machine’ developed by Thinking Machines used 64,000 {check} very small processors, wired together so that they could work in parallel, sharing information with ‘near’ or ‘far’ processors with ease. At the other end of the scale, the Cray supercomputer was the fastest sequential machine at the time; it seems that on parallel problems (like AL) the Connection Machine won hands down, and vice versa on sequential problems.

The point is that the Clockwork/Motor Paradigm has a half-life, and because we’re at the very beginning stages of exploring what the newer paradigm of the Network does and means, many kinds of paradigms exist at once (if we were to form some sort of way of looking at them pictorially, they might resemble fractals, complex interactions taking place between many attractors in the idea-scene). It also follows that there should be a kind of math that underlies (or powers) the new paradigm. If the mathematics of pre-Enlightenment were those of geometry (place), and those of the clock and motor paradigms calculus (pace), then the mathematical driving force of this evolutionary way of looking at things can be called ‘parallel intermath’ and relates to the quality of ‘pattern’ (Bailey 1996). The math that runs all of the above ideas, from neural nets to evolutionary computing, is temporal, parallel and multivalent; that is, it takes into account the history of a given bit of information, processes many calculations at once, and can look at the resulting data set from any (or many) points of view at once. This makes parallel intermath scary, because it is in one sense profoundly anti-humanistic; we can no longer do the math ourselves, we’re just not fast enough, so we have to hand it over to the machine. Because it is analogous to evolution, there is also the possibility (as in Minsky above) that it will get out of hand. Maybe, also, there is a jealousy when the machines we have created exceed our own abilities…

And the computer beats us hands down in many ways already, and has done since the day ENIAC was switched on. While the brain as an entity is unbelievably complex, it is also ridiculously weak in other ways. The basic processes of logic and math are intensely difficult for a human mind: if we look back a mere sixty years or so, at the task that the first electronic computer was created for, calculating trajectories and ranges for the artillery, the human computer (and that’s a real job description) took one working day to figure the range of one shot. And it wasn’t easy; apart from just keeping your mind on the task, you had to be meticulous, logical and thorough. I would say that these are not necessarily things that a mind is good at, or at least for, though I may be showing my own preferences here! Bailey writes about this new math as ‘the opening of Part Three of the Book of Nature’. He also adds some caveats that reinforce the idea that this Third Part is anti-humanistic: “With emergent computations… there is no direct way for the human mind to understand where the answer came from, because the formulation itself is constantly changing on the way” (Bailey 1996). This general idea has been a central factor in the study of evolution since the field began, and cohesive theories of evolution must deal with this idea of constant change, where not only the thing, but the thing’s tools, awareness, and fitness all change at the same time. In computing terms this means that the program and the data (and probably the hardware) should evolve together.

We should also see that the new paradigms will have a techno-scientific cutting edge; somewhere there are, in effect, mad scientists putting new things together with the new tools. The epitome of these ideas of genetic emergence as a mode of information has given us the Human Genome Project, the institution that hopes to map the human genome. The scary thing about that is that DNA is easily synthesised, and once the genome is mapped into pure information, then recombination of human genetic material is possible. While the goals of this kind of project may be about knowledge and medicine, the prospects of abuse are extreme. We are reaching the point where human genetic manipulation becomes just another information problem; there is no real conceptual difference in informational terms between manipulating bits to make artificial life, and manipulating chromosomes in real life. Such statements have the edge of conjecture, but already the study of genetics and biology is the new golden story of science, the new area where the techno-scientific complex is getting dirty, sponsoring textbooks and technology programs that have a tendency to take the desirability of genetic manipulation as a given (Haraway, 1997).

So we have discovered that there are several terms and ways of thinking that are wrapped up in each other: the ‘Network’ turns out to rely on ‘parallel intermath’, which is itself a brand of representing knowledge in a loosely ‘genetic’ way. The common thread is information, and this tangled and complex inter-relation of conceptual, actual and virtual we might call the ‘information imperative’: the force of the perception that, just as everything can be expressed as information, it must be to gain cultural validation. This is much like the Enlightenment, where valid scientific proofs were expressed as formulae because that was the expectation, not the necessity[ Bailey gives the example of Kepler, who waded in data to find his theories versus a Newton who would never condescend to such a thing, preferring the view that data was more correct if it tallied with the equation! ]. Soon pretty much everything could be expressed as formulae made with correct (linear) method, all illustrated with the image of Leibniz trying to solve the world’s problems by saying ‘Let us Calculate!’. It’s not difficult to imagine an equally gifted scientist, when confronted by a worldwide crisis, shouting ‘Let us Evolve!’.

Let’s take a bit of a breather from the geek-world here and start to think about how all this relates back to our base topic of theatre. It turns out that these ideas (Networks, an evolutionary metaphor) are already familiar and have been more or less accepted within the arts community for a number of decades. For instance, one of the key thrusts of abstract expressionism is that the product emerges from the process. As in a Pollock, the final picture is not clear until it arrives, and the process of trying to figure out either a meaning or an origin is just money in the bank to the critics and dealers. This idea has its corollary in many arts, and we can come up with examples easily: the cut-ups of Burroughs are generative and iterative processes that arrive at form, while the improvisation movement is a performance parallel. The impro is seeded then followed, the following becomes the process and product simultaneously, all completely inseparable and unique, a kind of performance-genetic magic. Again, you can say ‘that was a great moment’, but you can’t necessarily track it down and isolate its properties or causes. This general approach or attitude is often simply called ‘organic’, and in that word you have it again, the information (genetic) imperative.

Along with the rapid advance and integration of technology into our lives, AL raises questions of a more philosophical nature: What are the conditions of life? Given that these EAs can self-replicate, evolve and ‘boot-strap’, why couldn’t we call them living? Here the central question becomes the ‘unplugability’ property of EAs: can they be said to be living if, when you pull the plug on the machine, they cease to be? The obvious answer of ‘no’ has its problems though – what if we ask the same question of ourselves? Can we be said to be living when we have our own unplugability problem? First, as individuals we have a body that is unarguably temporally constrained, and then as a species we have the Gaia system, the ‘space ship earth’, which could constitute our own synthetic environment. Our lives definitely cease to be if either body or planet is ‘unplugged’. So maybe we have to say that there is a case for calling EAs ‘alive’, and if we do so then we get the idea of a life ‘in silico’, synonymous with life ‘in vitro’. This sort of thinking starts to upset our anthropocentrism, and it has been suggested that we, the carbon based life-forms, are merely the evolutionary platform of silicon based forms. Is it then, given such ideas, unreasonable to ask if there is a notion of a purely silicon theatre? Would we recognise it if we saw it?

iv. The ACTOR: a scorecard.

By way of a recap of all these technologies before we move on, let’s look at one way our ACTOR might drag itself up the evolutionary (and educational) ladder. We’ll presuppose a successful research project some years in the future, whose report in brief might go something like this:

First, ACTOR was able to understand everything that the human actor was saying, and able to follow the actor’s movements (F for passivity and selfishness). Secondly, we hooked ACTOR to ‘NEURON’, an advanced database where the information is held in networks of meaning, not in categories (F for ungrounded abstraction). Third, the inputs from the speech and motion recognition devices were hooked into the NEURON database, where they were assembled into complex inference patterns (F for sloppy reasoning). Fourth, we introduced internally bred Evolutionary Algorithms into the NEURON; they sifted and reconfigured information in ways that were then tested for fitness on stage, the whole process continuing over a period of months until it made enough independent bootstrapping decisions that it was able to make excellent guesses about what the human performer was doing, or going to do next (E, needs to try harder). Fifthly, the above mechanisms were connected to an output device, a live 3D environment where the ACTOR controls both the representation of the environment and also a robotic representation of itself for the human actor (B for effort – shows promise and a desire to learn). After further tuning with feedback from both its own systems and the human actor over a range of problems, a believable but crude degree of emotive expression was gained (B, needs to curb disruptive behavior in class). Further affective mechanisms (both input and output) were hooked up to a live audience, allowing a fine tuning of its actions towards emotional ends – it could now recognise lapses and shifts in an audience’s reception (A-, could excel with application).
We then ran ACTOR continually over a year with many other human actors and audiences, so ACTOR could experience and learn which actions and overall story shapes give an audience pleasure (A, shows strong aptitude for speech and drama), and what sort of temporal restraints are needed to work with that audience (A+, we are delighted with new-found sensitivity). Finally we debuted ACTOR at an improvisational workshop held in Iceland, to both academic and critical acclaim.

And so the ACTOR becomes a reality. Maybe it even seems hard to imagine a time when it could have been otherwise. This gives us an interesting picture: there’s our human and our non-human performer on stage, and the ACTOR is making the human look decidedly not-flash. Not only is the human actor slow, selfish in trying to hog the stage, hopelessly lost in a complex story, but the audience is wishing that they’d just get off and let the ACTOR finish up on its own, thanks; after all, they’ve paid good money to be entertained and that’s what the ACTOR is giving them in spades. Finally realising their plight, the human actor makes an arbitrary and hurried exit, leaving the stage to the ACTOR, who, with perfect timing, rounds off the scene admirably and takes its bows before repeating the whole event as many times as you want, as long as someone keeps it switched on… or it becomes HAL and takes over the power supply and (insert generic Manga plot here)…

Part Two: ACTOR beware

The ACTOR as Problematic Playmate

We will spend some time now looking at the problematics of an ACTOR (often retracing the same ground as above, but with a different emphasis), because it will lead us on to our main topic: how are theatre and information related, and what can we do with that relation? Often some of the problems and their antidotes will be antagonistic, and this is as it should be; the 'antidotes' given later are merely a set of possible attitudes that open out the discussion, rather than direct replies to the set of diverse problematics that I'm about to outline…

i. The ACTOR is a performer too…

Thankfully, it doesn't all end with the A+ grades and sell-out seasons on the West End. The ACTOR is not the apex of theatrical creation. It may be a triumph of IT when it happens, but – to be incredibly obvious – there is more happening in Theatre than what occurs in the short temporal zone of the performance. The performance is the active product of a complete engagement in personal, social and theatrical aspects of life. That's one problem with all the researchers above, with the possible exception of The Oz Project: there is a general belief that it's good enough to solve the ACTOR problem, and reap the many benefits, financial, academic and otherwise. They fail to make any effort towards this understanding: the ACTOR is a performer, just like the human Actor, and so is a part of the larger process of performance and all the forces that act upon that. The ACTOR has all the cultural, philosophic and artistic problems of any being apprehensible by the label 'performer'. And hopefully you will have noticed that, by moving along the path of seeing where our ACTOR might come from, we have been making some massive assumptions about theatre, drama and the notion of personality and intelligence which, for the most part, have been shared by the researchers that we have been visiting. We have been assuming that the ACTOR should be broadly analogous with a person and behave in humanistic ways to be seen to be valid. We have been taking drama as an Aristotelian construct. We have been dealing in psychological models of dramatic representation. We have, broadly speaking, been looking at an ACTOR as an entity that would function well in a fairly conservative, commercially oriented, linear theatre. In short, we're talking about old stuffy theatre, and not the type of theatre that is even slightly interesting outside of a historical frame.

ii. Problem One: Brenda Laurel and the Aristotelian Construct (Among Others).

As mentioned above, in 'Computers as Theatre' Laurel draws extensively on Aristotle's 'Poetics', using notions of Action, Tragedy and so forth to push for a better understanding of the narrative of using your computer. She maintains that tasks need to have a classic narrative shape: that the opening, using and closing of a spreadsheet would be more satisfying if the whole thing were framed as a narrative. She doesn't mean to propose something like the Microsoft Office Assistant that I mentioned above, just that the whole should follow the traditional three-act structure: possibility, production, conclusion.

While this sounds promising, and is a great read if you're into it, there are some serious flaws in this kind of thinking, both in the idea of theatre it holds, and in the vision of what a computer could be. Laurel's computer is a distinctly productive, end-gaming, logical thing, and her notion of theatre is severely historically bound. In her use of both fields she limits herself to what I have characterised above as 'motor' concepts[ To be fair, Laurel may have expanded her ideas somewhat. Her recent projects include a walk-through performance in Canada, and an award-winning multimedia product for children – the hardest audience?]. This is not a bad thing in itself when the type of computer that you're applying your knowledge to responds to motor organisational paradigms, but as we've seen, this is not the only kind of computing available to us, and may turn out to be the weaker kind. Her use of theatre, while probably deliberately reductive, is symptomatic of a view held by most researchers working in this area, a view that is severely out of date and naïve. We don't even need to go further than 'The Poetics' to start corroding this motor view of theatre-information:

“So the plot is the source and (as it were) the soul of tragedy; character is second. (It is much the same in the case of painting: if someone were to apply exquisitely beautiful colours at random he would give less pleasure than if he had outlined an image in black and white.)”

We would not necessarily agree with this statement. Pollock, Burroughs, Wilson, all to a degree do exactly that which Aristotle would find offensive. That is, they evolve ('at random') works of an evolutionary nature. Of course it's not Aristotle's fault that our conceptions are now nominally broader, and in some ways his ideas of things are richer than our own, particularly in reference to rhetoric. However, if we were to wholeheartedly agree with him on this point, without the possibility that there could be something else going on, we would be closing ourselves off to much of what is rich in the arts as a general pursuit. In fact, we would be locking off the possibility of culture ever renewing itself through any kind of 'random' (read Evolutionary) forces. To frame this a little more poetically, we would have to deny theatre one of its best qualities, we would have to deny theatre's imaginations, and strike this phrase out: 'The theatre is to society what the dream is to the individual'[ I have no idea where I read that…]. Likewise, a view of the computer solely as a productive (not random) machine limits us as well, as I have tried to argue above. At a broader theoretical / political level, Laurel is binding things up with a capitalist means of production that is deeply problematic in both Theatrical and Informational realms. In the Theatre we need only look as far as Augusto Boal's Theatre of the Oppressed (Boal, 1974) to note the problems that an Aristotelian drama has for minority groups, and Ted Nelson's vision, as outlined in Computer Lib and Literary Machines, provides a similarly refreshing take on what the computer (even in its motor mode) might achieve. It's very easy for political majority forces to ignore or displace 'deviant' ideas, and while, in one sense, research programs necessarily have a degree of deviance built in, there are immense pressures to follow the party line. 
The party line in theatre is (still, yawn) the Motor paradigm first articulated by Stanislavsky, and in Information it's (still, yawn) the motor paradigm first articulated by Von Neumann (the computer as sequential processor). That both these lines are encouraged because they are close fits to the (outdated but still prominent) capitalist modes of a commodity culture shouldn't escape our notice, and neither should the dominant culture's urge to assimilate and erase the value of divergent groups, vis-à-vis the eradication of alternative culture over the last ten years, where the commercial value of the market is acknowledged, but the ideas are just discarded if they are not comfortable.

This general kind of 'forgetting the last fifty years of difficulty' attitude towards culture is not solely the domain of theatre and IT of course; more or less the same thing happens in Hollywood with its tight insistence on Stories and Psychologies and, in Joseph Campbell's over-influential books, 'Myth'. It could also be argued that the same sort of forgetfulness has happened in culture in general, this quick turn we've made back to 'authentic' (aka motor) modes. Taken in a broader sense, this tendency to be scared of the new is deeply ingrained in our culture and ourselves, as a cursory glance at the reception of the idea of Evolution itself can attest. Many members of society did, and will still do, just about anything to believe that Evolution either excludes them, or, more subtly, that we, Homo Sapiens, are the miraculous end result of a gradual, innately 'right', process:

“Now clearly, all those four views – progressionism, gradualism, determinism and adaptionism – go together to form a set of comforts and hopes, to form a world that we would like to accept, one indeed in which we would feel quite comfortable…” (Gould, 1983)

Such anthropocentric attitudes of hope and comfort are, at best, an opiate. And in the repertoire of opiates we have to include precisely the kind of traditional dramaturgy that still rules most of our theatre. Is it enough to say that this kind of dramaturgy is good because it works, because it enshrines ancient values of story and transmission of knowledge? I would argue that, yes, traditional dramaturgy (sometimes) works because it strikes the right series of buttons to elicit an affective response, because it reinforces what we would like to think about ourselves and our place on this earth, regardless of the evidence to the contrary. That doesn't make it an absolute law of theatre; it makes it a habit. As I will argue below, habit is not a negative, but we really have to take off the blinkers and deal with the idea that ideas are not, like mineral resources, lying beneath the earth's surface just waiting to be discovered and described; they are human constructs, and liable to change:

“…there are no direct answers in nature to our hopes and moral dilemmas. I think that’s a job for human intellect to construct for itself. That’s the job of humanistic scholars; it’s a job for all of us as human beings, not the job of scientists to find in nature.” (Gould, 1983)

There is, then, a human project which we are all engaged in, the construction of our own reason for being. This is what Aristotle was doing, what Laurel was doing and what I'm doing. However, there is no dictum that our efforts must be progressive and deterministic, or that our ideas of a theatre must be built on the shoulders of Aristotle (or, for that matter, Heiner Muller). Our ideas for a theatre must, though, include the current paradigm, or they will fail.

iii. Problem Two: The Virtual Reality Engine is another Male Self-suppressive

While Virtual reality has failed in many ways to deliver on the hype it generated, there are a couple of things that we need to pick up from VR that also echo problematics for our ACTOR. The first is that VR changes the perception of a computer from a tool to a 'reality generator'. If we can view the computer as an 'information prosthetic', then in its application to VR it becomes a 'reality prosthetic'; not just as a technology, but a mode of making much like Theatre itself. In the extract below, another, perhaps more insidious side to VR is raised, where the transcendental urge is taken to a new extreme:

“Virtual reality does not mean head-mounted scanners and data gloves to Kroker & Weinstein. In their terminology VR is a whole assemblage of experiences, involving a traditional class consciousness, the spread of the ideology of techno culture and the hegemony of ‘liberal fascism’ and its swing back into ‘retro fascism’ as the political force behind the so-called ‘Will to Virtuality’. ‘Data Trash’ seems the purest consummation of Marxism, the severance of the commodity form from its economic base, into the notion of the pure estheticization of experience. Arthur: “We talk about the recombinant commodity form, in an economy run by the biological logic of cloning, displacing and resequencing. Or virtualized exchange, the replacement of a consumer culture by the desire to simply disappear, from shopping to turning your body into a brand name sign.” (Arthur Kroker in conversation with Geert Lovink)

There's a lead in the above extract to another point about VR. While it reaches out to a 'pure estheticization of experience', trying to flee from the mess of the everyday, it actually relies on, and therefore complicates, the body, reinforcing its presence while at the same time trying to get away from it and climb into a transcendental experience. This vision of soaring weightless through datascapes is a replay of the (masculine) desire to flee the body and live forever in the beautiful abstract. We've seen it through each mode: the belief that God is the perfect Clockmaker gives us the beautiful abstract of a mechanistic universe, where the power of making an equation makes its maker a kind of god. In the Motor mode, the depth understanding of the psyche makes the Psychoanalyst a god, while the motor itself gives everyman the ability to direct their own movie of speed (See in particular Virilio “Speed and Politics”). The irony is, of course, that the body always returns. No matter how much beautiful abstraction you can get, you still need to go to the toilet, you still make mistakes and crash the car; you still get the effects through the body, its displacement and proprioceptive abilities, something a performer understands implicitly.

The problematic for our ACTOR is that their embodiment-form will speak volumes about the body attitudes of the creators. Just as Frankenstein's Monster was a deliberately crude amalgam of available parts, evoking both horror and pity in its audience, so the ACTOR will be constructed for effect. While this in itself isn't particularly bad (we already have many such simpler animatronic creations) it raises an interesting question: Generally speaking, the selection and distribution of bodily attributes has been the province of (a now contested) Nature. Now we have to decide not merely what to augment with plastic surgery, but every detail of the embodied shape of our ACTORs. Going on the evidence of Cartooning, Anime, and Computer Game design, the future is not very bright; we'll be cursed with ridiculously endowed female characters (Vampira, or Lara Croft from Tombraider), ridiculously endowed male characters (Superman, Quake et al) or fairly boring recombinations of humans and machines ('The Borg' from Star Trek, the 'Evas' from Evangelion). The rather obvious conclusion has to be reached that most of these creations are masculine fantasies.

To be fair, most of the embodiments (whether Cartoon or actual) of the projects that I have described above have had sensible leanings, and tend to be asexual. Once the marketing dollar steps in, however, you can be sure that most things sensible tend to get lost, and the fantasies will take over. Taking the general reaction to things like Supermodels, it's fairly easy to see that an ACTOR, given a fantasy embodiment, could have similar effects on a general 'I've only got this body' public, increasing the alienation gap between what our bodies are and what we would like them to be. Extending this out a little, the ACTOR could create whole new levels to the meaning of Superstar. With Lara Croft from Tombraider, her huge public profile is sustained on several fronts by several different embodiments; firstly there is the game itself and the imagery, but then there is the Supermodel who takes up this imagery and continues it in the real world, even the suggestion that she become England's ambassador for science. With the ACTOR, there need be no such lapse; the representation and the embodiment will be identical and potentially merciless in its efforts to get profile – isn't the star who never sleeps and always looks the same the perfect media commodity? Also the strong possibility of literally being in film – where your image is digitally plastered over that of your favorite character in a movie – gives this all a whole new twist; everyone in the audience could be looking at a human actor and the ACTOR which is wearing their 'mask'. The permutations are endless…

From a masculine point of view nothing could be worse. The desire for the image, the abstraction of the female form, fetishisation of technology, it’s all potentially here in the most extreme form yet. Potentially, we’ll have to re-suffer the cliches from the early days of the net, where men were ‘liberated’ by discovering what it was like to be a woman. Yeah right. Already we have some men justifying their predatory behavior in Evolutionary terms (‘it’s natural for an older man to want to propagate in a healthy young female’), just as they comforted themselves in the Freudian death wish before that. The new paradigm can simply yield new ways to oppress others and yourself…

iv. Problem Three: The Virtual Class Politic and Recombinant Techno Culture.

A term that I'm going to hold onto from Kroker and Weinstein above is 'recombinant'. We've already examined the use of the term in its literal Evolutionary sense, but it's possible to use it in a more general way where 'recombinant culture' has to do with an attitude towards the material of that culture. That is, everything is available to be spliced, sampled, cloned, transcribed and replicated; from images to sounds to genetic material, everything is available, via the medium of information, to be manipulated (appropriated if you like) by everything else. Before we get scared about that, we should remember what we did above for the concept of 'emergence' vis-à-vis 'organic', and again we find that we've actually been quite comfortable with the ideas and practices of recombinant culture for quite some time. Postmodern architecture is a good, obvious, example of a recombinant style; all that quotation and juxtaposition is not only the decorative means, but the process by which it was designed. Again, we can think of recombinant architecture as a use of Network principles and a decentered ideology; nothing is any further from the work in process than anything else. At a more abstract level, the technologies of genetically altered food have the same attitude towards their gene pool. A gene is not measured by its container or category (human, pig, soybean) but by its properties (intelligence, yield, hardiness), which lessens the ethical drag on considering any gene as available to any other genetic chain. I'm not saying that that's a good thing of course, but it helps illustrate the principle: the Network's logic of construction is the recombinant form.

As with any paradigm implementation there are political issues of equity at stake. With the recombinant form Kroker and Weinstein argue that there is 'The Virtual Class'. Very briefly, this group of people are those that advance the platform of the idea of the Virtual as the new machine-metaphor-as-savior, much in the way that there used to be people who really believed that industrialisation would set a developing country free. They have the above-described 'Will to Virtuality', a desire to escape this life at the fastest speed possible, equating the induced blur of effect with new experience[ There's a good argument to be made here for the study of the blur and the drop shadow as defining characteristics of 1990's design in the same way that colour and organic letter forms were the defining mark of the late sixties hippie culture.]. As I have argued elsewhere, this desire to live entirely on a vector of speed without any 'folding' is flatlining, a kind of technically induced death that lies at the heart of the vision of people like Bill Gates, who would love to live this perfectly straight life at warp speed, hoping their stocks do more or less the same thing (O'Leary, 1997). While Gates may be the only person on the planet who can both afford and command the technical savvy to live the Virtual Life, we can be sure there are many queuing up behind.

An interesting defining characteristic of the group is that there is no dress code. With the dissolving of alternative culture, The Virtual Class wannabe can be seen in many forms. The most obvious wannabes are often those who are shouting the loudest in the cool stakes; magazines like the defunct Mondo 2000 and Wired are two such groups that teach us that dreadlocks do not make a Rastafarian. What is common to all is an absolute belief in the value of individual freedom to the point where it is not a right to be fought for, but a duty to see how far along that way you can go, where freedom jettisons any gesture towards responsibility or ethics. The strategies of such individuals and groups are those of networked late-capitalism; the accumulation of info-resources (Gates buying the rights to pictures in the Louvre, say), the recombining of business practices and media (e-commerce, where shopping becomes Virtual entertainment), and a distinctly apolitical bent. It's called a Virtual Class, but politics ceases to be relevant to it: firstly because public politics in general is a cynical enterprise, secondly because they're talking bigger than politics. Certainly the Windows operating system is bigger news than most governments, with the possible exception of the US who are fighting Microsoft to make sure that doesn't happen. The Virtual Class has no concern with the world order (other than as a source to be strip-mined for the markings of culture) because that is exactly what it is trying to evolve beyond. 
It has no need to put money back into anything other than itself, because its goals are often to eradicate the social mechanisms that are independent of it or antagonistic to it, like Unions, offering inducements instead in the form of Virtual payments like company stock[ is a case in point here, the perfect Virtual company which makes no money, keeping its workers tied to it through highly volatile company stocks; along with call centres, the perfect Network equivalent of the 19th Century sweatshops.]. The goal of The Virtual Class is a society where media and entertainment are the strong dope that keep you giving them your cash and attention. It's almost like that old thing where society is being pacified by TV, except that now you'll be given the option of feeling involved (in your own pacification). The irony is that the mechanism of the network (which gives them the new logic of power) becomes their way of becoming the dictators that their beliefs in individual freedom tell them that they couldn't possibly be. It's this cloaking that makes a virtual monopoly product like the Windows operating system merely 'a recognition of the universal uptake of good design' rather than a form of fascism. And so the Network becomes a tainted term in many ways, and maybe in years to come the term IT will be seen in much the same way as 'The Enlightenment', a double-edged sword at best.

Performers regularly grapple with the ideas of political implication and insertion into their society; often they perform elaborate internal justifications that allow them to, say, take an advertisement so that they can continue to do co-op theatre, or take a soapie to pay the deposit on a house. In fact everyone does more or less the same thing in life, juggling their personal values with commitments, finance etc. Performers, however, are way ahead on this in that they have always lived in the contract world that the rest of society is becoming. Needless to say, the ACTOR may not have to worry about such things, may not even be given the choice. Like all Virtual technologies, the politics is programmed in; there will be no tantrums when asked to take off their clothes, no refusals to endorse the sponsor's product – in fact the skill set of the ACTOR may include product placement as a positively reinforced goal. The ACTOR will be able to take a lot of Media time, the ACTOR could use that time how its creators see fit, and the Computer Actor could become the Computer Seller without you even noticing the shift. You might think this is all a bit ridiculous, but this is a reality of the ontological differences between the human actor and the ACTOR: the human will know when and why they will perform for what money paid by whom in the service of whatever. The ACTOR will not make those value decisions, because the companies that will end up putting them together will not let them, because the logic of the Virtual Class decrees that those human values are, in fact, obstacles to the greater act of saving humanity through technology.

vi. The Anti-ACTOR.

As a kind of summation and culmination of the potentials and problematics of the ACTOR, I’m going to splice some of the technologies that I’ve been isolating off back onto themselves using the techniques of the recombinant form, add a twist of theatrical (and televisual) practice and history, and see what comes out. I’m going to create, in the spirit of Frankenstein and from the basics of the ACTOR, a broader concept of a Computer Theatre. The results are more satire than straight prediction, but fairly obvious possibilities within a performance sphere given all of the technologies and concepts that we’ve come across so far:

1: The Reality Theatre Machine. The reality theatre machine has access to all telematic networks, and can switch to any form of camera in the world when a significant change in the visual field happens. Thus it can ascertain when a car loses control on a freeway, or someone is knifed on a train platform, and start recording immediately, passing the feed across to the TV network in real time. Through response buttons on their sets, viewers can fine-tune and personalise their program. These same viewers say that sport and real life really are the best theatre, and may occasionally wonder if the police can afford access to these programs…

2: The Stanislavski machine: A machine that scans a play, breaks it down into logical sets of problems, actions, through-lines. The machine then breeds artificial-life characters, and sets them running within the world of the play, arriving at a set of most likely character types and descriptions. This machine could be used in the automation of soap operas and TV series, generating story lines quickly, then matching the results against a database of affordable actors…

3: The Automatic Arnie. The Arnie machine is a one-off. Using fuzzy logic, the Arnie machine inserts itself into the computer editing desk, makes inferences about how its host prototype would act in the given situation, and then, by powerful 3D modeling, writes the image/audio signature of its master onto the print. There is no obvious drawback, audiences are unable to tell the difference, and so the Automated Arnie is able to make money well after the host's death, accumulating into a bank account for the day his cryogenically frozen body catches up with the technology it used to portray…

4: The Keith Johnstone Machine. Groups of machines utilising Affective Reasoning involve themselves in non-stop improvisations for years at a time, unobserved in a studio somewhere, generating impro exercises for the world's drama teachers at a prodigious rate, downloading the day's breakthroughs via email (for a price)…

5: The Physical Theatre Machine. This machine recreates, through VR, dirty, dangerous performance spaces – without the liability problem – then gives you such a battering via the senses that you don’t remember what the show was about, just the impact it had on you; ‘unspeakable’…

6: The Anthropological Theatre Machine. Through much scanning, inference and painstaking research, the Anthropological Theatre Machine makes scrupulous recreations of ancient theatre and rites using elaborate robots and a modified planetarium. To keep the audience from falling asleep, the Machine uses fuzzy logic to make on-the-fly, analogical statements about the current action, relating, for instance, the beginning of the Sumo match to the pre-fight antics of professional boxers, or the apparent reluctance of one subatomic particle to bond with another. In essence it is a knowledge base of all human activity…

7: The Data Theatre Machine. The data theatre machine dramatises the flow of information and electronic commerce for infotainment purposes. Keeping a rigorous eye on the flow of all information pertaining to monetary flow (the universal medium of international communications), the Data Theatre Machine creates dramatic situations and characters not dissimilar to the ongoing cyclical nature of Melrose Place. Smaller multinational companies get bit-parts, while large players have their own story lines, created from their fortunes on the world's stock markets. Imagine the excitement as the world watches Sony's Kazuka man outbid Columbia's Mountie for control of 20C Fox in a game of poker. Watch Russia's poor but beautiful maiden tumble to her knees and beg for mercy from the IMF's sober-faced bank manager while Uncle Sam sits beside his Mississippi of cash…

8: The Meta Theatre Machine. Because all the machines above are networked, constantly sharing information with each other, there is a 'hive mind' effect, and the Network itself gains an identity. This machine possesses all knowledge of human actions, character, motivations, physicality, as well as being able to argue, reason or just bluff its way through complex human interactions, presenting multiple personas as needed, presenting the results as performance of truth, either on stage or screen. Because the identity of the Meta Theatre Machine is so complex and all-pervasive, it gives itself the nickname 'God', though it also possesses the reasoning power to assert that it may not exist…

So where does the Theatre stand in all this?

Somewhere above I’ve mentioned that Theatre is in a great place to analyse and critique some of the elements of our techno-recombinant culture. This may seem absurd given some of the scenarios that I’ve conjured, and given that theatre has none of the cultural standing or cold cash of some of the heady institutions that I’ve mentioned above, none of the late-capitalist logic of the Virtual Class, no multi-billion dollar backing of the world military machine. Still, Theatre has some unique characteristics that are worth our consideration before we all throw up our hands and give in to the seemingly inexorable push of the cinema multiplex and interactive high-definition television. First I’m going to even out the playing field a bit, then get a bit historical and positive about politics, finally ending up back with the Classics, looking at the very root of this word Technology and where it all started from…

i. Antidote One: History and The Reality Check

The first antidote is history. While history in IT circles seems to have a zero date of 1946 and the attention span of a goldfish, recent commentators have been casting a more skeptical eye over the 'cyber-hype' and focussing their attention back on some of the conceptual precursors of cyberspace. Darren Tofts, in his book, The Memory Trade – a prehistory of cyberspace (1998), makes a great case for the disciplines of literature, philosophy and the 'ars memoria' as being the conceptual structures that underlie our ideas of cyberspace, the conceptual engine of VR. While Tofts mentions theatre's role in this prehistory, mostly in relation to Aristotle's idea of imitation and mimesis, he (wisely) avoids taking on the full challenge, as there is something of a strange relation between theatre, VR and what he calls 'cspace':

“The transformation of actual stage space into “‘real’ dramaturgical space” is a mysterious alchemy that theorists are still grappling with… As a consequence, there is a danger of approaching the virtual stage in the light of the same misapprehensions and unresolved conclusions about traditional elements of theatrical space. Rather than attempting, at this early stage in the development of cyberspace, to formulate a psychopathology of the virtual experience, it is more constructive to simply attempt to describe the structure of virtual representation.” (Tofts, 1998)

The 'unresolved conclusions about traditional elements of theatrical space' that Tofts talks of are, of course, exactly those mobile forces which make performance-space interesting, and it's doubtful if any conclusions can or should be reached. It's far more useful for theatrical practitioners to keep any misapprehensions firmly planted in the middle of the practice.

A notable misapprehension absent from discussions of cyberspace, and crucial to theatrical practice, is that of emotion. And by this I don’t mean just the solo performer or audience member, but include the role of assembly in the whole experience, a collective aspect of performance. These forces form very complex patternings; a description of them would take serious study, and we would have to look at notions like proxemics, crowd theory, the whole gamut of depth psychologies and their antitheses, social and actor theories in behavioral psychology and so on. But it will be enough for us, following Tofts’ lead, to consider the ranges of virtual representations that occur in the theatrical adventure, and for this we’ll have a look at what’s happening for an actor as they stand on stage and do their thing. In no particular order then:

So, even scratching the surface here, we’ve come up with five reality sets to keep in touch with. Some of them are purely physical, others virtual. However, even the ‘real’ ones are virtual if considered from their obverse, the ‘real’ of the role. Such apparent contradictions are (and always have been) the actor’s and orator’s domain. In the ars memoria, the orator imagines a building, populates it with concepts and facts, and then ‘walks through’ this mental palace as they deliver a speech or recall some history; two realities are kept alive at the same time: the place the talk is actually occurring in, and the place the orator is inhabiting virtually, the palace of memory. In theatre the mix is more complicated again, adding, as we have, further emotive, technical, physical, energetic and imaginative dimensions.

There is a principle at play here which relates back to our thinking on the Network, and that is parallelism, or simultaneity. The basic premise is borrowed from Fuzzy Logic: the glass can be both half-full AND half-empty because it belongs equally to the set of full glasses and the set of empty glasses (Kosko, 1994). This may seem trivial, but if we extend this logic we can arrive at tenable systems where nothing is ruled out; instead inclusion (transclusion?) reigns. Moving from an ‘either-or’ sequence to an ‘and’ Network is the key here, and, via the history of the ars memoria and the theatre, it’s possible to say that this principle is well established and deeply practiced, if not widely understood. It is precisely the principle of simultaneity that gives theatre its worth: what is theatre if not an alternative reality that takes part in the real? Framed in any way (an escape, a challenge, a mirror), part of the Theatre’s appeal is that it lies beside a culture, as well as being a part of it. The strength of theatrical practice in Poland under communism attests to this double nature; different enough to comment on, but similar enough to affect, a culture.
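The fuzzy-logic premise of simultaneous membership can be sketched in a few lines of Python. The function names and the 0.0–1.0 scale here are illustrative, not Kosko’s own notation:

```python
# A minimal sketch of fuzzy set membership. A glass is measured as a
# fraction between 0.0 (empty) and 1.0 (full); it belongs to BOTH the
# set of full glasses and the set of empty glasses, to varying degrees.

def membership_full(level):
    """Degree to which the glass belongs to the set of full glasses."""
    return level

def membership_empty(level):
    """Degree to which the glass belongs to the set of empty glasses."""
    return 1.0 - level

level = 0.5  # half-full
print(membership_full(level))   # 0.5 -- partly a member of 'full'
print(membership_empty(level))  # 0.5 -- AND partly a member of 'empty'
```

The point of the sketch is the ‘and’: unlike classical set membership, nothing forces the glass to belong to exactly one set.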

So far in this essay, I’ve talked about two kinds of reality engines: the theatre and virtual reality. We should note that there is a prior reality machine that we simply take for granted: the body and its sensorium. We could, just to follow our own little in-joke, give these acronyms: TR, VR and RR. Following on from this, and given the principle of the Network, it is possible to say that the three can be set spinning in their own system, each drawing insights from the others, crossing back and forward over any arbitrary boundaries we may set them. I would go so far as to say that a comprehensive description of artificial realities could be made from such a set, though maybe we would have to include CR (Cinematic Reality), as there are a number of effects that are exclusive to it. What would be the point of such a description? Given that we’re talking about theatre’s role as antidote to some of the problematics of info-culture, such an inclusive schema gives us a possible history, an extensive body of works, a relation between the real and the virtual that is critical and mature, and, most importantly, a reason to think that theatre might not only be interesting from the perspective of information, but vital to any project that wants to draw analogies and comparisons between any of our above realities.

ii. Antidote Two: The Positive Politics of Contested Realities and a Genetic Politic.


So maybe the growth of culture is more evolutionary than we first think. We tend to mark cultural leaps in the arts with significant works or significant figures, with all the problems that come with establishing a canon of any kind. However, it’s quite possible, and even very likely, that these individual works or artists are, in effect, emergent phenomena. We are used to locating artists within their times and locations, but can we use a more ‘genetic’ approach?

To take just one example, the work of Len Lye, there would seem to be some kind of simultaneity happening in the mind: on seeing one of his experimental films of the late twenties, a noted biologist remarked that the shapes Lye was using were ‘portraits’ of the organisms he was finding in his work. Is it possible that both had reached the same image of a thing by different emergent methods?

“That which belongs to the realm of the image is irreducible by reason and must remain within the image or be annihilated… In the exalted realm of images, illusion properly speaking, or material error, does not exist, much less the illusion of knowledge: but this is all the more reason why the meaning of a new knowledge can and must descend into the reality of life.” (Artaud, 19??)

We realise that the image a society holds of itself is often presented back to itself in the arts and sciences (Artaud’s ‘realities of life’); but neither has a fast hold on the representative metaphors of a culture. The grand images of the Twentieth Century will include equally a portrait of Einstein in front of his equation and Picasso’s “Les Demoiselles d’Avignon”. It’s a firm assumption of my argument that both fields move somewhat in step, or are at least so involved in each other that changes in one are almost instantly carried into the other. And I don’t think this is merely a 20th-century communication-age thing either, but certainly with the Network as a guiding principle (or convenient platform), it’s impossible to say that they don’t affect each other. Indeed, in much of the research I’ve described above we can see that a somewhat truncated idea of theatre makes it into the research, but then again a hard-core information technologist would probably say that my understanding of their field was equally limited.

I think it’s possible that things go a bit further, and that in an area of equivalence (that is, in the realm of ideas or images) the two fields quite freely interbreed and take part in a fairly chaotic evolutionary mish-mash. It should be noted that I think the sciences and arts (and in this case IT and theatre) understand the world just as much as they need to and no more; if a new understanding is too far outside the common understanding, then you are mad and people may take years to catch up (vis-à-vis Van Gogh, Artaud, or in the information sciences Franklin or Kepler).

Todo list of topics for here

iii. Antidote Three: Techne

I’ve left this until last deliberately, though strictly speaking I should have gone to some effort to trace the root of technology right at the top. However, it is such a beautiful and (relatively) simple concept that by using it now, after going through all these twists and turns of information theory and such, I hope we will find that it gives us a kind of grounding and a base from which to work. The word we use as ‘Technology’ (a logos of technology) was coined quite recently, sometime towards the end of the nineteenth century. It has gained the kind of meaning we now give it only in the last eighty years or so, since the Industrial Revolution: a meaning that connotes the mass production of artifacts needed to promote industrialisation. However, it was not always so.

There’s a fairly obvious connection from ‘technology’ and ‘technical’ to the word ‘technique’ that gives something more of the feel for the original Greek word ‘techne’, which is usually translated as ‘art’, ‘skill’ or ‘craft’. There are also similar meanings from similar roots: the Sanskrit taksan for ‘builder’ or taksati for ‘he forms’; the Latin words for making, texere for ‘to weave’ or tegere for ‘to cover’; and the Indo-European stem tekhn-, ‘wood-work’ or ‘carpentry’.

At first go this seems to make technology just a simple additive of ‘craft’ and ‘knowledge’ to make ‘know-how for artisans’. However, the joining of techne and logos happens only rarely in the classical Greeks, specifically in Aristotle’s Rhetoric, where the meaning seems to be more about words that describe techne than a strict ‘logos of technology’ itself (Mitcham, 1994). Aristotle’s formal definition of techne is ‘a habit with a true logos concerned with making’, which we can unpack a bit to read ‘an ability to make that depends on correct awareness of, or reasoning about, the thing to be made’. It must be stressed here that logos in this context has little to do with our contemporary notion of logic as ‘The General Science of Inference’, and rests rather on notions of principle, reason and proportion, and our ability to apprehend such principles and forms (Bailey, 1994). In the ancient Greek world these notions seem to be much more flexible than our own, where we bury the logos under the catch-cry of ‘logocentrism’. (We need to suspend our judgement for a while here, and look back to logos in an open way, much as we are now prepared to do with ideas of sexuality in antiquity.) It’s this idea of a ‘true logos’, a ‘correct awareness of, or reasoning about, making’ that is interesting here. With the current use of the word Technology we’ve really narrowed down this sense, and technology is now the practice of understanding the materials, the design and the process of production. With the older set of definitions, the technical includes much more. Mitcham points out that technical products fall under Aristotle’s four causes in his ‘Physics’:

“According to this discussion… the “why” of a thing is answered only by grasping the “that out of which” it comes (material cause), its eidos or archetype (formal cause), the “what makes of what is made and what causes change of what is changed” (efficient cause), and the telos or “that for the sake of which” a thing is made (final cause). What is important in such discussions is that Aristotle does not limit the technical, as we might be tempted to do, to efficient causation. The making of artifacts involves all four causes.” (Mitcham 1994)

So a ‘true logos’ is more to do with a comprehensive understanding of the artifact, a deeply felt (felt in the way that a carpenter may apprehend the grain of a wood) understanding of the material, the form to which it is heading, the making itself and the uses of the artifact. We could scarcely say that the industrial process of manufacture has much to do with this depth of knowledge. It follows that techne has very little to do with mass production, such a thing being unknown at the time of course. It also follows from Aristotle’s formal definition (‘a habit with a true logos concerned with making’) that it is a known, teachable and repeatable practice, and that it is concerned with material, not pure theory.

The first point, that techne is part habitual, may not seem very exciting or at first glance to lead us anywhere; the concept of ‘habit’ is not very well thought of in our novelty-seeking culture. However, to turn back to the logics of parallel intermath, Bailey points out that the notions of emergence and parallelism can be found before the advent of the industrial revolution in the accretion of knowledge by artisans, specifically the wheelwrights of England. No one artisan could say where each piece of their knowledge came from, or indeed why various things about their practice were as they were. They were, in the narrow sense, ‘habitual’, but in a more inclusive and sophisticated sense we could say that each artisan was a node in a complex inter-relating and learning network of applied knowledge, albeit a very slow one. The eradication of such networks by the Industrial Revolution was almost complete; the death list includes wheelwrights, shipbuilders, thatchers, the town-crier, et al. It’s very interesting to note that among the things that were not eradicated comes performance. Performance is one of the few direct lineages of ‘habit’ that we have left; the others include writing, philosophising, music and the rhetorical arts like law and politics.

The second point, that techne is concerned with making in a material sense, is worth opening out a little too. Among the things that Aristotle refers to as great techne is flute playing. The implication is that music (and by extension, drama) has the habit, the logos and the making. Clearly we don’t need to say that the ‘material’ need be purely physical. Material can be understood more openly, in the way that we tend to now; that is, material is something you use, like a collection of text or a score, to create. The artisan (or musician or performer) has to apprehend the material (understand it both technically and, more broadly, creatively) and then make it (perform). The key thing to realise is that techne is situated, embodied, and interpretive. Within the conceptual mode of the Network, techne is one of the old modes of knowledge that gets picked up, as it doesn’t deal in abstraction, reduction or equation-related logics.

One of the best examples of techne at work in this non-physical material sense is in the work of Gregory Ulmer, a media theorist and writer who has done much work on the emergence of ‘new’ media and the elucidation of guiding metaphors in new modes of creation. In ‘Heuretics’ (1994) Ulmer gives a bravura performance as a practitioner of techne. Heuretics proceeds in various ways, all of which can be seen to be driven by Aristotle’s four causes. Loosely, the things he does are these: he uses Descartes’ ‘Discourse on Method’ as material from which to extract his own method, ‘mystory’ and ‘chorography’ (the material cause), which he also uses in his teaching (the formal cause). He applies this method to a work which he is going to perform at a family reunion (final cause), and spends a lot of the book telling us about this process (the efficient cause). Ulmer both describes techne, through the recollection of the process of making the performance, and does it, in the sense that the process becomes the material from which he makes his method of chorography:

“They were designed to simulate the experience of invention, the crossing of discourses that has been shown to occur in the invention process. Realizing that learning is much closer to invention than to verification, I intended mystoriography primarily as a pedagogy.” (Ulmer 1994)

Bearing in mind that a discourse can be material and invention is making, then Ulmer is using techne in its pure sense; his act of mystoriography is ‘a habit with a true logos concerned with making’.

It’s interesting to note that Ulmer’s techne takes place across several forms that have resisted industrialisation: teaching, performance, philosophising. In fact it is precisely those habitual practices with non-physical material that have survived the Industrial Revolution relatively intact. Performance is a full practice in the mode of techne and as such it cannot be subsumed by Industrial technology as we know it, though our study of the ACTOR suggests it may be possible with Information technology. Of course we can study performance as a (more or less) social science, a cultural study or as a textual study of literature, but this is not Performance in the mode of techne. And this is where Universities (among others) get it wrong: the text of a performance is not an artifact in the study of literature, and it is not a material to be made into performance; it is both.

To take a more general outlook then, performance (and other forms of techne) are closer to the root meaning of technology than information technology itself is. In a way, that’s entirely meaningless, as technology as a concept has so thoroughly reinvented itself as to be not apprehensible solely by this notion of techne. However, what we are left with is a sense that performance and other practices in the mode of techne include ways of representing knowledge that are actually more in tune with the conceptual paradigms of the Network than most knowledge representations as carried out by the IT field. This is why, I think, that knowledge in the form of stories is becoming more interesting to IT researchers: not necessarily because everyone wants to ‘tell stories’ as such, but because the making of stories is one of the ways that very old knowledge about how the world (and we) work is carried. Not the knowledge within the story (the moral or lesson) but the mode of making the story in the first place, ‘a habit with a true logos concerned with making’.

Antidote Four: Deal with Evolution


Conclusion: a theatre in formation.

Given that we’ve covered a huge amount of ground and that we’ve looked at things in a two-fold manner (firstly through the specifics of the ACTOR, secondly through a series of more open and opinionated topics), it seems to me that we can conclude the first but not the second.


The ACTOR is a firm possibility. We may be some years away from being able to give it an A+ on the scorecard, but variations and smaller versions will drift onto the horizon and then into our lives in the next couple of decades. It’s possible that IT will reach some sort of technical ceiling, but the advances in evolutionary computing will probably be responsible for some very big changes that may overcome the kinds of problems, such as the Frame Problem, that stand in the way of the ACTOR.


The key thing here seems to be this though: if we can imagine it, as I have done here, then we can, in all probability, make it. Whether the ACTOR is used in the right kinds of ways, or simply becomes SOLDIER or NURSE, is a considerably more open question, and one that I am fairly pessimistic about. My reason for this pessimism is the cash question; until technology ‘makes it to the desktop’ most artists, sadly, don’t get near it, and by then it’s pretty much too late. The hope here is that the powerful research agencies can somehow open out to non-commercial and non-military entities while keeping their funding bodies happy, and this will largely happen only if research bodies are granted the degree of autonomy that seems to be disappearing as they become more commercially ‘accountable’ for their actions.

Problems and Antidotes

In the second part of ‘theatre in formation’ I tried to locate the ACTOR in some broader frames of reference, and touched on topics like politics, the Virtual Class, VR, the history of cyberspace, performance, the concept of techne, evolution, and Aristotle’s Poetics. Let’s briefly summarise some of the points I’ve made along the way that make the ACTOR a scary thing:

New ‘reality engines’ like VR do not just appear out of nowhere and enjoy complete free agency, but are apprehensible by other reality engines such as cinema, theatre, literature and the rhetorical arts.

Antidote Five: there is no antidote.

If we can accept all of my above conceptual movements and leaps, then we should be left with something like this:

The current mode de eon is active in several ways: it uses the analogy and methods of the Genetic Imperative, the techno-logic of the Network, and the broad metaphor of Pattern.

This mode will have a fascistic element, as well as forces that will contest that fascism.

The contesting of the mode can always be seen to be part of the functioning of the mode itself, as it will either obey the mode it contests in its own terms, or be irrelevant.

The above statements, applied to theatre, have the following correlatives:

It therefore follows that the problems posed by the ACTOR, and the mode of Genetic/Network/Pattern in general, have no antidote, because the problem and the solution are always two members of a series that are spiraling around each other; the terms of what is a success and desirable or a failure and undesirable are open to assignation by any concerned force or group of forces (such as a theatre group or a research group). It is therefore up to willing theatre practitioners to knowingly situate themselves in all the complexities of a broadly Informational/Genetic Culture at a one-to-one level, i.e. to inform their own practice with awareness, such that the practice itself is a kind of awareness (techne) that has effect and consequence at a macro-cultural level.



‘Machinic Phylum’ is, as De Landa acknowledges, a term originated by Deleuze and Guattari in ‘A Thousand Plateaus’, where it is used

Until the rise of the printing press it was just as easy for a scribe to deal in diagrams or letters, and the manuscripts that dealt with the motions of the planets and geometry were rendered diagrammatically, both because there was no algebra to speak of, and no good reason to use any other method. With the press, however, the diagram was laborious and tedious. ‘Luckily’ the rise of algebra meant that diagrams were replaced by equations, to the point where it is hard to imagine that most of the Greeks knew no math; it is significant that we learn about ratio and Pythagorean math in school, because that was it; a further understanding of anything from Ptolemy to Galileo requires only geometry.

The ‘Turing Machine’ is worthy of a whole book in itself: the startlingly simple theory of how a device, consisting merely of a writer that could write a dot or erase one, a reader that could tell if the tape was marked or not, an infinitely long length of read/write tape and a very simple lookup table, could be used to create a machine that could imitate any other machine. In its essence this is exactly the strength of the computer: it is not much by itself, but as a machinic chameleon, it is supreme.
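A toy version of the device described in this note can be sketched in a dozen lines of Python. The state names, the ‘_’ blank symbol and the table encoding are my own illustrative choices, not Turing’s formalism:

```python
# A toy Turing machine: an (effectively) unbounded tape, a head that
# reads and writes single symbols, and a lookup table mapping
# (state, symbol) -> (symbol to write, move direction, next state).

from collections import defaultdict

def run(table, tape=None, state="start", steps=100):
    tape = defaultdict(lambda: "_", tape or {})  # blank tape is all '_'
    pos = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape[pos]                       # read under the head
        write, move, state = table[(state, symbol)]
        tape[pos] = write                        # write, then move
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example machine: write '1' three times, moving right, then halt.
table = {
    ("start", "_"): ("1", "R", "one"),
    ("one",   "_"): ("1", "R", "two"),
    ("two",   "_"): ("1", "R", "halt"),
}
print(run(table))  # 111
```

The ‘machinic chameleon’ quality lies entirely in the table: swap in a different lookup table and the same bare mechanism imitates a different machine.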

Both the Turing Test in particular and AI in general suffer from the ‘Frame Problem’, first identified by McCarthy and Hayes in 1969 in ‘Some Philosophical Problems from the Standpoint of Artificial Intelligence’. The problem – at least from the computer’s point of view – is that as humans engage in dialogues, they make a great many automatic adjustments to their beliefs: thus a proposition that is true at one stage of a dialogue may be ‘framed’ to be false at another, depending on context. Seems kinda obvious. This is the underlying reason why AI’s biggest coup is Deep Blue’s defeat of a chess master. The rules of chess, compared with the loose structure of language and discourse, are minuscule, and, while I’m not saying chess is easy, it relies on permutation and assessment of possibility.

This is contrary to popular belief; the Internet was not built specifically to decentralise communications protocols because of the possibility of nuclear war, though it did serve very well…

It isn’t in many ‘developing’ countries. In his book ‘Being Digital’ the otherwise interesting Nicholas Negroponte says that the real issue of have and have-not is one of age and not economics. This is pure self-deluding nonsense. Technofear/stress is as much an issue in children as it is in the aged, and there are many aged people who are very info-progressive.

The big names here are Stanford, MIT, Carnegie Mellon, The University of Birmingham, De Paul, all universities with either enlightened computer science departments or an active role in the development of Artificial Intelligence.

There have been numerous studies of MOO culture, but perhaps one of the most interesting documents is Allucquère Rosanne Stone’s [ ], in which she follows the progress of the ‘rape in cyberspace’ incident of a few years back. The application of performance studies to low-bandwidth events (such as phone sex) tends to focus us back on the evocative power of language; it is arguable that a MOO without graphic representation is many times more powerful than one with.

I have had the opportunity to play in such a system, and it’s lots of fun. At the experimenta exhibition within [ ] ‘98, a system was set up by [ ], in which you could advance through several screens and affect the tempo and intensity of music. The best thing about it was watching IT geeks trying it out. Some got it completely (you had to use your body); others failed this test dismally and backed off, muttering things about ‘toys’.

Thanks to the excellent book Technobabble for a guide to putting together meaningless strings of jargon!

A footnote on acronyms: CAVE is an SRA (self-referencing acronym) in that it means ‘Cave Automatic Virtual Environment’. Such trickiness is common in the computer world; for instance ‘GNU’ (as in the GNU public licence, a standard for releasing free software on systems like Linux) stands for ‘GNU’s Not Unix’ and is a TLA (Three Letter Acronym). I would recommend the ‘Hackers Dictionary’ in paperback or downloadable txt file to anyone interested in the arcane and delightful use of English in the computer world; also ‘Technobabble’ (John A Barry, MIT Press, 1991), for more of an overview. Interestingly enough, recent films that take on the idea of VR reverse the whole notion: in both Dark City and The Matrix, VR is seen as the thing to be escaped; its presentation is exactly the sense of a world filled with Samsara that must be escaped (through love, of course…)


Tamagotchi is a handheld ‘digital pet’, released by Bandai in 1996, which its owner must regularly ‘feed’ and care for lest it sicken and die.

Kin Selection is the evolutionary principle whereby an organism can propagate its genes by aiding the reproductive success of its close relatives, even at a cost to its own.

Or, more simply, just run a ‘Virtual Machine’ inside this one, as with using a Java Applet to view examples of Evolutionary Algorithms on the Net.

Bailey gives the example of Kepler, who waded in data to find his theories, versus a Newton who would never condescend to such a thing, preferring the view that data was more correct if it tallied with the equation!
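The kind of evolutionary algorithm those applets demonstrate can be sketched very briefly: a population of bit-strings evolves toward a target by mutation and selection. All names, rates and population sizes here are illustrative, not from any particular applet:

```python
# A minimal evolutionary algorithm: mutate, select, repeat.
import random
random.seed(1)

TARGET = [1] * 16  # the 'fittest' genome is all ones

def fitness(genome):
    # Count how many bits match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == 16:
        break
    # The fittest half survives and reproduces (with mutation).
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

print(fitness(population[0]))  # best fitness found
```

Nobody ‘designs’ the winning genome; it emerges from the loop, which is the sense in which this note’s Kepler, wading in data, is closer to the evolutionary temperament than the Newton of the tidy equation.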

To be fair, Laurel may have expanded her ideas somewhat. Her recent projects include a walk through performance in Canada, and an award-winning multi media product for children – the hardest audience?

I have no idea where I read that…

I’m considering this from my own Masculine embodiment here, not a requisitioned feminist view.

See in particular Virilio “Speed and Politics”

There’s a good argument to be made here for the study of the blur and the drop shadow as defining characteristics of 1990s design, in the same way that colour and organic letter forms were the defining mark of the late-sixties hippie culture.

[ ] is a case in point here, the perfect Virtual company which makes no money, keeping its workers tied to it through highly volatile company stocks; along with call centres, the perfect Network equivalent of the 19th Century sweat shops.

Deleuze has done a brilliant excavation of the idea of simultaneity in language: The Logic of Sense is, in part, an exposition of the kinds of joining and simultaneity in ‘nonsense’. He places a large emphasis on the difference between a portmanteau word whose meaning is multiplied and complicated (‘nonsense’), and a conjunctive word where the meaning is merely additive. See also Deleuze’s ‘Louis Wolfson; or, The Procedure’ in Essays Critical and Clinical.