
The Avatar Program

A project log for Horizon

The next step in creating Virtual Reality.

WalkerDev 04/23/2019 at 01:31 • 0 Comments

Technology has long allowed humans to act in places far from where they physically are. If that is possible, why can't there be 1 user in 2 locations? This would require a user, a willing catalyst and a connection between the two.

The Avatar system is a program dedicated to creating said catalyst while maintaining a virtual version of the user. It is a simple yet advanced process that may become available to all in the near, yet far future.

The system is composed of 5 main components:

- A Neural Interface that scans and analyzes the brain

- A virtual core program that holds an AI heavily based off of the user

- The Virtual Spindles that allow said Avatar to exist in / interfere with things in the real and virtual world

- The structural system that allows for the Avatar's body and movement functions

and most importantly

- The synthesis program that generates AI from learning and development over years in the virtual world (via an increased rate of time) rather than Input/Output data.

It'd be good to look at each individual component and study it further. Let's start with the Neural Interface.

- A Neural Interface that scans and analyzes the brain

      A Neural Interface (or BCI) is a system that connects the brain to an external device, whether that be a computer, a prosthetic or a set of servos, for example. As of right now, the most powerful BCIs are capable of enabling prosthetic usage, the control of objects and the control of some machinery.

      Most BCIs use EEG or some other sort of brain sensor that outputs wave data. The Vector Gear, on the other hand, uses a 3D brain-scan ring that creates a real-time brain image and plugs it into a virtual brain, where it can be assessed by entities that exist in the Horizon. These include AI Alpha, Beta, Iota and Sigma, which can form much more advanced AI if needed, but that is a lecture for a different day.

The AIs there are capable of registering parts of the brain, aligning them with an AI and, in some cases, serving as a health indicator. Speaking of AI,

- A virtual core program that holds an AI heavily based off of the user

     Thanks to the wonders of game engines, Visual Studio and modern research, it is possible to mirror an item between the real world and the virtual world. For example, you could connect an Arduino to Unity, then have the electricity flowing through it be represented by a color change, with the color on only for wires that have electricity running through them. This is the same approach used for the Virtual Core.
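The real-to-virtual mirroring described above can be sketched in a few lines. This is a minimal illustration, not the project's actual code: the class and pin names are hypothetical, and in practice the pin states would come from an Arduino over a serial link rather than a hard-coded dictionary.

```python
# Hypothetical sketch: a physical wire's state drives the color of its
# virtual twin. In a real setup, pin_states would be read over serial.

ON_COLOR = (255, 200, 0)   # lit wire (current flowing)
OFF_COLOR = (40, 40, 40)   # dark wire (no current)

class VirtualWire:
    """Virtual twin of one physical wire."""
    def __init__(self, name):
        self.name = name
        self.color = OFF_COLOR

    def update(self, has_current):
        # Only wires with electricity running through them get the lit color.
        self.color = ON_COLOR if has_current else OFF_COLOR

def mirror(pin_states, wires):
    """Push each real pin reading onto its virtual counterpart."""
    for name, state in pin_states.items():
        wires[name].update(state)

wires = {"D2": VirtualWire("D2"), "D3": VirtualWire("D3")}
mirror({"D2": True, "D3": False}, wires)  # e.g. readings from the Arduino
print(wires["D2"].color)  # lit
print(wires["D3"].color)  # dark
```

The same one-way flow (sensor reading in, visual state out) scales up to the Virtual Core idea: the virtual object only reflects what the physical one reports.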

      The Virtual Core is a system that uses the BCI and virtual engines to create a simulated brain. At the time of this writing, there has yet to be a fully finished brain, but brain scans, Nengo AI and a virtual brain are being implemented in Unity in an attempt to create an AI based on a user. While creating an AI this way is easy, the hard part is collecting memories and matching brain activity to resemble the user, while having the Avatar remain a separate entity. This becomes especially noticeable with
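To make "simulated brain" concrete: spiking simulators such as Nengo are built from simple neuron models like the leaky integrate-and-fire unit. The sketch below is a generic textbook LIF neuron, not the project's actual brain model, and all parameter values are illustrative.

```python
# A minimal leaky integrate-and-fire neuron: voltage integrates the input
# current, leaks back toward zero, and resets whenever it crosses threshold.
# Illustrative parameters only; not the project's actual model.

def simulate_lif(current, steps, dt=0.001, tau=0.02, threshold=1.0):
    """Return the number of spikes emitted over `steps` time steps."""
    v = 0.0
    spikes = 0
    for _ in range(steps):
        # Leaky integration: v decays toward 0, driven by the input current.
        v += dt * (current - v) / tau
        if v >= threshold:
            spikes += 1
            v = 0.0  # reset after spiking
    return spikes

print(simulate_lif(current=1.5, steps=1000))  # ~1 s of simulated time: spikes
print(simulate_lif(current=0.5, steps=1000))  # below threshold: stays silent
```

A full virtual brain would wire millions of such units together with learned connection weights; the single-neuron case just shows what each piece is doing.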

- The Virtual Spindles that allow said Avatar to exist in / interfere with things in the real and virtual world

     As the real world is overlapped with the virtual world and constantly watched by your Avatar, the AI can interact with items as long as the user does and/or the item exists in the virtual world. This allows the AI to learn more and build more detailed bridges between neurons. This ability is most important in the first stages, when the AI is given a catalyst brain and shapes it to have more advanced bridges between neurons. In fact, this could theoretically be key to creating conscious AI.
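The "bridges between neurons" strengthening with use resembles Hebbian learning ("neurons that fire together wire together"). The sketch below shows that rule in its simplest form; the project's actual learning rule is not specified, so treat this as an assumption.

```python
# Hebbian-style sketch: a connection weight grows whenever the neurons on
# both ends of it are active at the same time, up to a cap.

def hebbian_update(weight, pre_active, post_active, rate=0.1, w_max=1.0):
    """Strengthen the connection only when both neurons fire together."""
    if pre_active and post_active:
        weight = min(w_max, weight + rate)
    return weight

w = 0.0
for _ in range(5):                 # five co-activations while interacting
    w = hebbian_update(w, True, True)
print(round(w, 2))  # → 0.5
```

Each interaction with the world gives the AI more co-activations, so the connections it actually uses grow stronger, which is the "more detailed bridge" effect described above.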

      The connections between neurons also allow for more capabilities while subjecting the AI to several limiters, like short- and long-term memory. These weaknesses are needed in order to have a much more efficient AI. While most reading this may think otherwise, it is important to remember that memory is limited, and the use of short- and long-term memory is what brings out intelligence at its best. This can be tested at any rate, especially with the increased perception rate of Avatars. This comes from
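One way to picture the short/long-term limiter: short-term memory is a small bounded buffer where old items fall out, and only items rehearsed often enough get consolidated into long-term memory. The class and thresholds below are hypothetical, purely to illustrate the mechanism.

```python
# Sketch of a bounded short-term memory with rehearsal-based consolidation.
from collections import deque

class LimitedMemory:
    def __init__(self, short_capacity=5, consolidation_threshold=3):
        self.short_term = deque(maxlen=short_capacity)  # oldest items fall out
        self.long_term = set()
        self.rehearsals = {}
        self.threshold = consolidation_threshold

    def observe(self, item):
        self.short_term.append(item)
        self.rehearsals[item] = self.rehearsals.get(item, 0) + 1
        # Repeated exposure consolidates the item into long-term memory.
        if self.rehearsals[item] >= self.threshold:
            self.long_term.add(item)

mem = LimitedMemory()
for item in ["vase", "vase", "vase", "chair", "lamp", "door", "table", "rug"]:
    mem.observe(item)
print("vase" in mem.long_term)   # rehearsed 3 times → consolidated
print(list(mem.short_term))      # only the 5 most recent observations
```

The limit is the point: forgetting one-off noise while keeping repeated patterns is a filter, not just a weakness.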

- The synthesis program that generates AI from learning and development over years in the virtual world (via an increased rate of time) rather than Input/Output data.

      The synthesis program is a basis for the AI that does for an AI what evolution would. Instead of having an AI spend months trying to train itself to a certain point, an increased perception rate can be used. Where an AI would spend a month trying to find a new way to connect its brain interface, time could be manipulated to run 1000x faster than actual time. This allows a few decades' worth of development to happen in a week or two; a few hundred years would need a correspondingly higher rate. The limit would be the supercomputer needed to create an initial brain, but Walker Industries will take care of that.
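The arithmetic behind the perception rate is worth spelling out. At a given time scale, the simulated development that fits into a stretch of real time is just real time multiplied by the scale; the numbers below are illustrative.

```python
# How many simulated years fit into a given number of real days at a
# given time scale? (365.25 days per year.)

def simulated_years(real_days, time_scale):
    return real_days * time_scale / 365.25

print(round(simulated_years(real_days=14, time_scale=1000), 1))   # two real weeks at 1000x
print(round(simulated_years(real_days=14, time_scale=10000), 1))  # same span at 10000x
```

At 1000x, two real weeks buy roughly 38 simulated years; reaching centuries in the same span would take a scale closer to 10000x.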

      This also brings new opportunities, such as having a system where the inhabitants may not even be real people, yet act, look and think as if they are. That could be seen as an opportunity because these AIs could take rational and humane actions, unlike AI today. Where you could tell a current AI to throw something away and it might mistake a vase for a garbage can due to the opening in it and put junk in there, the AI shown here could recognize the vase as a vase thanks to prior knowledge and experience with actual vases. This would also be the first time that humanity had other humanoid sentient life on Earth, but that is not the point being made yet; it will be fully researched soon.

      As time goes on, the Avatar can be given a catalyst body in the real world, and the user can control the Avatar mentally. With that comes the final component

- The structural system that allows for the Avatar's body and movement functions

     As said before, the AI has a virtual body, but unlike what you may think, the 3D body is given realistic modules such as bones, a nervous system and, of course, the brain. This is all created using the SEED program, which will be discussed another day. This gives the AI a broader knowledge base and makes it more approachable. With pain, it understands the risks of certain machinery and appliances that a regular AI would not be able to observe unless they were in a database. With happiness, it can better find the preferable option when given a choice. For example, it could decide that one product is better through watching and observation rather than ratings alone, which prove to be false at times.
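The observation-over-ratings idea can be sketched as a tiny preference rule: the agent estimates each product's quality from its own observed outcomes and ignores the posted rating. The product names, ratings and observation data below are made up for illustration.

```python
# Sketch: prefer the product whose quality is estimated from the agent's
# own observations (1 = worked, 0 = failed), not from posted ratings,
# which can prove false.

def observed_quality(outcomes):
    """Average of the agent's own observed outcomes."""
    return sum(outcomes) / len(outcomes)

products = {
    # name: (posted rating out of 5, agent's own observations)
    "Product A": (4.8, [0, 0, 1, 0]),   # great rating, poor in practice
    "Product B": (3.9, [1, 1, 1, 1]),   # modest rating, works every time
}

best = max(products, key=lambda p: observed_quality(products[p][1]))
print(best)  # → Product B
```

By the ratings alone the agent would pick Product A; grounding the choice in lived observation flips the decision, which is the advantage claimed above for an Avatar with real experience.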

      The system also allows the Avatar to host itself in a robotic body. If the user willed it, they could have their Avatar in this reality as an actual being, but that is a conversation for the future.

While these are the 5 main components, the next log will take a further look into the aesthetics of Avatars and their manifestation. If you are willing to have an Avatar made from a photo of you and posted here, please message me.

We will also look into what exactly the Vector Gear is.

-Until the solstice, 

TKTS
