
Egor V.2 Robo-Animatronic

An intelligent adaptive automated robotic system

The robot can track movement using a Kinect sensor and respond to questions typed on a wireless keyboard; voice recognition can also be implemented using the Mac's Dictation and Speech functions in System Preferences, fed by the Kinect's microphone array. The robot uses an adapted version of the ELIZA algorithmic framework (a forerunner of assistants like Siri) to respond to participants' questions, and the resulting 'script' is output to AppleScript's voice modulator so it can be heard through the robot's internal speaker system. The project is suited to museum displays, help desks, or interactive theatre. The ELIZA framework can be changed to mimic any individual or language (it is currently based on Marvin from The Hitchhiker's Guide to the Galaxy) and to answer complex questions, making this a highly interactive and knowledgeable system. The voice can also be modelled on specific individuals, and the mouth and lips react to the sound on the computer's audio output, so they stay more or less in sync.
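The adapted ELIZA script itself is not included on this page, so as a minimal illustrative sketch (the rule text, replies, and function names here are all hypothetical), keyword rules can map a question to a canned Marvin-style reply, which on a Mac could then be spoken aloud with the built-in `say` command rather than the project's AppleScript voice modulator:

```python
import re
import subprocess

# Hypothetical ELIZA-style keyword rules with Marvin-flavoured replies.
# A real script would have many more rules and reassembly patterns.
RULES = [
    (re.compile(r"\bhow are you\b", re.I),
     "Terrible, thanks for asking. Brain the size of a planet and they ask me that."),
    (re.compile(r"\byour name\b", re.I),
     "Marvin. Though I doubt remembering it will make either of us happier."),
    (re.compile(r"\bI feel (\w+)", re.I),
     "Why do you feel {0}? Not that it matters."),
]
DEFAULT = "Life. Don't talk to me about life."

def respond(text):
    """Return the first matching rule's reply, substituting captured groups."""
    for pattern, reply in RULES:
        m = pattern.search(text)
        if m:
            return reply.format(*m.groups())
    return DEFAULT

def speak(text):
    """Speak the reply aloud via macOS's `say`; a harmless no-op elsewhere."""
    try:
        subprocess.run(["say", text], check=False)
    except FileNotFoundError:
        pass  # not on a Mac

if __name__ == "__main__":
    print(respond("I feel gloomy today"))
```

The captured group in the "I feel ..." rule shows the classic ELIZA trick of reflecting the participant's own words back inside a templated reply.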

Full Description:

The title of my final-year Multimedia Design project (Actuality and Artificiality) is an adaptation of the post-modernist philosopher Jean Baudrillard’s ideology, Simulacra and Simulation: simulacra as object and simulation as process. The robot can track movement and respond to questions typed on a wireless keyboard; voice recognition can also be implemented using the Mac's Dictation and Speech functions in System Preferences, fed by the Xbox Kinect's internal quad-microphone array. The robot uses an adapted version of the ELIZA algorithmic framework (a forerunner of assistants like Siri) to respond to participants' questions, and the resulting 'script' is output to AppleScript's voice modulator so it can be heard through the robot's internal speaker system. Here the project serves as an interactive exhibit, but it is highly adaptable and would suit museum displays, help desks, or interactive theatre. The ELIZA framework can be changed to mimic any individual or language (it is currently based on Marvin from The Hitchhiker's Guide to the Galaxy) and to answer complex questions, making this a highly interactive and knowledgeable system. The voice can also be modelled on specific individuals, and the mouth and lips react to the sound on the computer's audio output, so they stay more or less in sync. The system tracks people's movement via the Kinect: the current build uses an open software library that tracks the pixel nearest to the sensor, but the script also includes skeleton-tracking output, which can be activated by un-commenting the skeleton-tracking code and commenting out the point-tracking library. With skeleton tracking, multiple individuals can be tracked and interacted with at once, allowing for larger audiences.
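The open point-tracking library is not named above, but the nearest-pixel idea it implements is simple to sketch. Assuming the Kinect depth frame arrives as a 2-D array of millimetre distances (with 0 meaning "no reading"), the tracker just scans for the smallest non-zero depth:

```python
def nearest_point(depth_frame):
    """Return (row, col, depth_mm) of the pixel closest to the sensor.

    depth_frame: 2-D list of millimetre distances; 0 means "no reading".
    Returns None if the frame contains no valid readings.
    """
    best = None
    for r, row in enumerate(depth_frame):
        for c, d in enumerate(row):
            if d > 0 and (best is None or d < best[2]):
                best = (r, c, d)
    return best

# A toy 3x4 "depth frame": the closest valid reading is 800 mm at (1, 2).
frame = [
    [0,    1200, 1500, 0],
    [1100, 0,    800,  900],
    [0,    1300, 0,    1000],
]
print(nearest_point(frame))  # -> (1, 2, 800)
```

The returned (row, col) pair is what the head and eye servos would then steer towards; a real frame is 640x480, so a production version would subsample or vectorise the scan.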


Files:

  • scpt - 3.86 kB - 05/27/2016 at 08:02
  • sketch_may28a.ino (Arduino side) - ino - 6.91 kB - 05/27/2016 at 08:02
  • pde - 10.53 kB - 05/27/2016 at 08:02

  • Components: see the Details section for the full list.

  • Ident Added

    Carl Strathearn - 05/30/2016 at 09:18

    Added a logo I've been working on!

  • New stuff added today!

    Carl Strathearn - 05/28/2016 at 22:34

    Today I added some more diagrams and uploaded a fancy GIF for you all. The GIF (mind cogs) was originally the cover of my special study into artificial intelligence, conducted as part of my first degree. I added it because I wanted to highlight the connection between the theoretical and practical elements of this project. I hope this gives the project further substance, drawing not only on the mechanical side but also on the artistry behind what I am trying to achieve.

    Carl

  • A bit about blueprints and costings.

    Carl Strathearn - 05/27/2016 at 23:59

    Over the next few days I will be putting together some 2D diagrams to give you all a better understanding of how the different elements go together.

    (Example)

    The best thing about this project is that the set-up and mechanisms are very easy to put together and to swap out if problems such as servo failure occur. The main reason for this design is that, unlike my other project (Aldous), Egor was made for autonomous live entertainment and interaction. So, for example, instead of using the standard four-servo set-up for the eye controls (two sets linked via splitters to function simultaneously), I used only two servos, as it is easier to swap out and diagnose issues with two servos than with four. There is no 3D-printed technology in Egor V.2; I have used 3D printing on my latest model, but avoiding it here opens the project up to a wider audience and makes Egor V.2 accessible to those without access to that technology.
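As an illustrative sketch of driving the two-servo eye rig described above (the angle limits below are hypothetical safe ranges, not measurements from this build), a tracked target position can be mapped to one pan angle and one tilt angle:

```python
def gaze_to_servo_angles(x, y, pan_range=(60, 120), tilt_range=(70, 110)):
    """Map a tracked target position to (pan, tilt) servo angles in degrees.

    x, y: normalised position of the target in the camera frame (0.0 to 1.0).
    pan_range / tilt_range: illustrative mechanical safe limits for the two
    eye servos, so both eyes move together from a single pair of angles.
    """
    x = min(max(x, 0.0), 1.0)   # clamp so a bad reading can't slam the servo
    y = min(max(y, 0.0), 1.0)
    pan = pan_range[0] + x * (pan_range[1] - pan_range[0])
    tilt = tilt_range[0] + y * (tilt_range[1] - tilt_range[0])
    return round(pan), round(tilt)

print(gaze_to_servo_angles(0.5, 0.5))  # centred gaze -> (90, 90)
```

With only two servos there is exactly one angle pair to debug, which is the maintainability argument made above; the Arduino side would simply write these two values to its servo outputs.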

    This approach also brings down the cost of the project. As it was developed during my first degree, I was pretty much broke most of the time, so I had to budget and prototype using materials I could get quickly and cheaply. On the whole, the project cost around £330 to construct and develop over the course of a year. I hope it will cost you a lot less, as I have done much of the prototyping of the mechanisms and design for you, and tried my best to get the best possible results with the materials at hand. That is not bad, considering the robot was valued by my university at £1,200 after my final-year exhibition in 2014. Now it's time for you to put your stamp on it and make it yours.

  • Egor V.2 was the foundation of my latest work Aldous (EMS-30.02)

    Carl Strathearn - 05/27/2016 at 18:10

    Egor V.2 was the foundation for my latest animatronic project, Aldous. Here is a video of Aldous in action! What can you make from Egor's tutorial?

  • Theoretical Model (Abstract from my master's thesis)

    Carl Strathearn - 05/27/2016 at 17:05

    I am going to share this with you as it grounds the purpose of my practical work in theory.

    Abstract

    Beginning with Tom Gunning’s theoretical model of cinema as a machine for creating visceral optical experiences, the core proposition of this study stipulates that pure CGI characters no longer have the ability to accurately simulate consciousness and materiality, or to meet the expectations of the modern-day cinematic audience. It has been claimed that the movement towards hybrid systems (motion capture / live-action integration) provides a form of mediation between actuality and virtuality, adding depth ('soul / consciousness') and a kinaesthetic grounding of external operations in an attempt to solidify and reify the virtual image into something organic. However, it is suggested here that hybrid systems have problematic issues concerning inaccurate approximation of surficial reflection, portrayal of additional appendages, incomplete character formation (interaction / performance), and encapsulating / staying true to an actor’s performance during editing. The imprecision of these elements becomes increasingly apparent over time, especially at close proximity, where it is discernible to the evolving critical eye of the average modern-day cinematic observer.

    This projection positions hybrid systems not merely as mediation between physical reality and holographic dimensions but as a means of returning to the more substantial and grounded animatronic character systems. Modern animatronic characters / puppets exhibit greater aesthetic verisimilitude and more organic simulative external and internal operations at close proximity than the most advanced CGI and hybrid systems, as they are grounded in the parameters of the physical world. This research explores a possible return to animatronic special effects in the future of film as the primary medium for character creation, overtaking CGI and other virtual hybrid systems, which lack the ability to propagate visceral optical experiences, fine detail / nuance, and genuine, responsive characters that meet the evolving critical expectations of the cinematic observer. Technological advancements in animatronic interactive control systems allow accurate tracking of movement, autonomous extemporaneous expressions (at a programmable level), voice recognition with recording / response technology, precise kinetic functions with meticulous coordination, and the ability to continually repeat sequences of action. In addition to these properties there is potential post-production value via adaptation (the interactive Rocket Raccoon, Guardians of the Galaxy 2014 promo: Tetsushi Okuyama). Further reinforcing this theoretical position, the film Harbinger Down (2015) became the first successful publicly backed Kickstarter campaign for a cinematic feature to exhibit animatronic characters as the primary special-effects medium (3,066 backers pledging $384,181). The major Hollywood production Star Wars: The Force Awakens (2015) has demonstrated a return to practical and animatronic special effects over the predominant modern orthodox virtual progression, grounding the possibility of an animatronic renaissance.

  • Introduction and history

    Carl Strathearn - 05/27/2016 at 17:01

    I'm going to use this space to tell you a little bit about the robot's adventures so far. We have attended exhibitions, university lectures, club talks and festivals.

    I am a passionate animatronics practitioner, currently at the end of my master's degree by research in animatronic system studies. I have had a long-standing love of animatronic characters since childhood, from the first time I saw the film The Dark Crystal up to modern-day character systems like those presented in the new Star Wars films.

    I am very proud to say that I have interviewed some of the very best Hollywood special-effects engineers and gained valuable insights into a unique and often closed-off industry. As far as I am aware, I am the only academic taking animatronic theory as a serious in-depth study. I will share some of my findings with you in the next log.

    Best wishes.


  • 1
    Step 1

    I have listed all of this in the Details section, as I found it flows better there.

