
PATENT NO. 14/253,861:
Synthetic Consciousness
is Coming Our Way.



MAJOR SCIENTIFIC MINDS fear AI. The dire warning now coming from the likes of Elon Musk, Peter Thiel, Stephen Hawking, and other very credible thinkers is that eventually something like Skynet (of The Terminator) or the machines of I, Robot (starring Will Smith) will be developed, turn on us, and destroy humanity. Looking at the blinding speed of advancement in the field of Artificial Intelligence, that fear appears to be well justified.


But what if these tech geniuses are actually late to the problem? What if someone has already been working on it for the last decade?


They are soon going to discover that reliable Artificial Intelligence is attainable for machines, and not necessarily the threat they perceive, thanks to the work of Dr. Alan Rosen. They just need to find him. Dr. Rosen has been quietly working on the challenges that the next great leap in AI will bring, and claims to have the patents and solutions in place to save us all from the threat of smart machines going rogue.


Dr. Rosen is an old-school scientist. After teaching graduate-level space physics at USC and UCLA, he rose to third in command at TRW (now Northrop Grumman), where, as lead space scientist, he was responsible for devising outlandish experiments for some of NASA’s most successful missions of the era, including Explorer VI; Pioneers I, II and III; Pegasus; and NASA’s Relay communications satellites.


Since retiring from aerospace, Dr. Rosen has spent the last 20 years quietly collecting patents on his latter-life passion: human-based robotic controllers (the controller being the robotic brain, as opposed to the robotic body it controls). Working with his engineer sons at the home-based Southern California skunkworks they call Machine Consciousness Inc. (or MCon), Dr. Rosen has produced work that is a game-changer.


But first, some industry background is in order. There are some very big players in the sandbox of humanoid robotics, including Google/Boston Dynamics, Honda, NASA, some crack university labs here in the U.S., and some truly impressive robotics teams working in Korea, Japan, Europe and China. Some prime humanoid robotic specimens showed off their skills earlier this year at DARPA’s famed Robotics Challenge.


These latest creations are, to say the least, impressive (if not downright frightening) in their balance, dexterity and coordination, and they are all the result of many years and millions of dollars of R&D. Much of this investment has been devoted to the frames, servos, gyroscopic balancing systems, power systems and other hardware that are analogous to the bones, muscles and basic physical and sensory systems of a human being.


But what about the brains?


Beyond the bodies, robotic controllers are being equipped with complex neural networks, loaded with algorithms to coordinate the motors and sensors, to perform feats such as bipedal walking on uneven surfaces, articulated object manipulation, obstacle avoidance and a slew of other nifty tricks.


But while all this makes for impressive YouTube videos of robots demonstrating their skills onstage or on a closed course, most robots today remain severely limited to functioning within carefully controlled settings such as factories and laboratories, and to performing small, well-defined, repetitive tasks; either that, or they work at the command of a good old-fashioned human behind a remote control. As impressive as they are, today’s robots still have trouble autonomously dealing with the kind of unpredictable, unstructured environments that real life delivers, and thus they require constant supervision. They remain a far cry from, say, Star Wars’ C3PO, the humanoid robot we have all yearned to be pals with since we were kids.


One sector acutely focused on the challenge of robotic autonomy is that of self-driving cars, which are being aggressively developed by Google, Tesla, Mercedes, BMW, Baidu and others. These vehicles utilize an array of sensors and GPS navigation to drive autonomously to a destination while remaining aware of (and hopefully avoiding) the cars, pedestrians, dogs, stop signs and other objects in and around their path. The current technology already promises to be far safer than human drivers, who are easily distracted by stress, cell phones, attractive pedestrians and vodka martinis. But once again, even though self-driving cars have autonomous attributes, they still perform a single well-defined task: safely getting a vehicle from point A to point B.


This brings us to yet another exciting subset of AI that can be described as “learning robots”. The promise of learning robots is that these machines can utilize advanced neural networks to avoid lengthy programming processes that, in any case, only work within limited, predictable parameters. So, for example, rather than having a team of programmers write a million lines of code in an attempt to prepare a robot for a few carefully anticipated scenarios, imagine instead teaching a robot in a classroom via demonstration, just as we do with a human child, and enabling it to learn through trial and error. The robot essentially writes its own code; it programs itself. UC Berkeley, for example, is working on such a robot it calls Darwin. Darwin is teaching itself to walk through a self-programming process of trial and error, and it is truly a trip to witness (no pun intended). And yet, while the idea of a robot teaching itself to walk is undeniably ground-breaking, Darwin and the others in this sub-specialty of learning robots are still performing a set of highly limited tasks that leave them a far cry from mythical sentient robotic celebrities like Tony Stark’s J.A.R.V.I.S., HAL 9000 or the lesser-known CHAPPiE (which was actually a better movie than it was given credit for). This brings us to the holy grail: fully autonomous humanoid robots.
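To make the idea concrete, trial-and-error learning can be sketched in a few lines. This is a toy hill-climber, not Darwin’s actual algorithm (which is unpublished here): the “robot” tries a random tweak to a parameter and keeps it only if its score improves, writing its own “program” one accepted tweak at a time. The stride-length objective is invented for illustration.

```python
import random

def trial_and_error(score_fn, n_trials=200, seed=0):
    """Keep randomly tweaking a parameter; accept only tweaks that score better."""
    rng = random.Random(seed)
    best_param, best_score = 0.0, score_fn(0.0)
    for _ in range(n_trials):
        candidate = best_param + rng.uniform(-1, 1)   # a random tweak
        score = score_fn(candidate)
        if score > best_score:                        # keep what works
            best_param, best_score = candidate, score
    return best_param

# Toy "walking" objective: pretend the best stride length is 2.0
stride = trial_and_error(lambda p: -(p - 2.0) ** 2)
```

After a couple hundred trials the learned stride lands near 2.0, with no programmer ever writing a rule about stride length.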


Some robots are programmed to balance, to walk upon uneven surfaces, to avoid obstacles and to ambulate to a particular point, but can they decide on their own where they are walking to, and why? Can they determine their own mission? Imagine a robot that can embark on its own exploratory adventure, or hold an intelligent and meaningful conversation with you. Imagine a robot that can start one task and then change its mind and begin another. Imagine a robot that can think creatively, and use learned judgment to determine the right action to take in an unanticipated situation. In the common vernacular this is referred to as “free will”, and it introduces us to the exciting and terrifying world of fully autonomous humanoid robots.


Your first question might be “Why on Earth would anyone create such a thing?!” and, if you’ve seen any of the aforementioned apocalyptic robot movies, that would not be a bad question. The short answer is that they are going to be invented by someone sooner or later, and better that we, the good guys, supervise their creation than let the evil scientists rule the robot army (and have all the fun).


This brings us back to MCon. Dr. Rosen likes to say that he has spent the last 15 years reverse engineering the human brain. The result is his portfolio of patents and published papers, including the “Relational Robotic Controller”; the “Intelligent Auditory Humanoid Robot and Computerized Verbalization System Programmed to Perform Auditory and Verbal Artificial Intelligence Processes”; and the “Intelligent Visual Humanoid Robot Computer Vision System Programmed to Perform Visual Artificial Intelligence Processes”. Collectively, MCon’s systems take a radically different approach to the puzzle of robotic artificial intelligence to produce what Dr. Rosen refers to as synthetic consciousness.


The method by which this feat of unholy creationism is achieved is the miracle of MCon’s Relational Robotic Controller, affectionately referred to as the self-circuit. In layman’s terms, the self-circuit is MCon’s proprietary method by which its robot relates all that it perceives (through its visual, tactile and auditory sensors) to a miniature image of itself: a homunculus that resides within its neural network. The homunculus is a miniature representation of the external boundaries of the robot’s body, such that anything that happens to the actual body is interpreted as happening to the robot’s ‘self’. You read that correctly: this robot has a sense of self.


An MCon robot’s early training consists of a process by which it first learns its own boundaries: where its self ends, and where the external world begins. Hold your right hand out in front of you, and with your thumb touch each of the other fingers on that same hand. Notice that with each touch you feel two pressure points (one on your thumb and one on the finger touching it). This is how you know you are touching your self, and it tells you that both points are a part of you. It is quite different from the feeling of touching an external object with one fingertip, in which case you feel only one point of pressure instead of two. That is but one method by which the MCon infant learns to differentiate what is a part of its own body from what is external to it: when it feels two pressure points in its infant learning phase, it self-programs an internal image of its own boundaries. Once the homunculus programming is complete, the robot relates all external input to itself rather than just recording empirical data as a lesser robot would (more on that in a minute). In addition to tactile sensors, the MCon robot is also equipped with auditory and visual interpretational systems that relate all respective stimuli to its robotic self, and thus we end up with a learning machine that is radically different from anything else being developed.
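The two-pressure-point rule lends itself to a tiny sketch. Everything below is a hypothetical illustration of the idea, not MCon’s patented design: when two pressure sensors fire at once, both must lie on the robot’s own body, so both get added to a learned “body map”.

```python
from dataclasses import dataclass, field

@dataclass
class BodyMap:
    """A minimal 'homunculus': the set of sensor IDs learned to be self."""
    self_sensors: set = field(default_factory=set)

    def learn_from_touch(self, active_sensors):
        # Two simultaneous pressure points => the robot touched itself,
        # so both sensors lie on its own body boundary.
        if len(active_sensors) == 2:
            self.self_sensors.update(active_sensors)

    def classify(self, sensor_id):
        return "self" if sensor_id in self.self_sensors else "external"

body = BodyMap()
body.learn_from_touch({"thumb_tip", "index_tip"})  # self-touch: two points felt
body.learn_from_touch({"index_tip"})               # touching the world: one point
print(body.classify("thumb_tip"))   # "self"
print(body.classify("table_edge"))  # "external"
```

The single-sensor event teaches nothing about the body boundary, exactly as in the fingertip experiment above.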


To illustrate the uniqueness of this approach, let’s imagine that a lesser robot (that is equipped with a visual system and an array of pressure sensors) is standing dormant in the middle of your living room. In this state it is pretty much the same as an old toaster: a machine waiting to receive a command from a program or a remote operator. By contrast, when an MCon robot is standing in the same room, it is very busy looking around, taking in its surroundings, and internally relating all that it sees to itself. It sees the doorway and relates to it. Without moving a single motorized muscle, it thinks to itself:


“That is a doorway.”
“That doorway is 10 feet away from me.”
“I can fit through that doorway.”
“It would take me 3 seconds to get to that doorway if I were to walk to it.”
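The doorway “thoughts” above amount to storing each observation relative to the observer. Here is a hedged sketch of that idea; the class, its fields, and the numbers are all invented for illustration and are not MCon’s actual representation.

```python
import math

class RelationalPerceiver:
    """Stores observations relative to the robot's own position and dimensions."""
    def __init__(self, position, width_ft, walk_speed_fps):
        self.position = position          # (x, y) of the robot, in feet
        self.width_ft = width_ft          # how wide the robot's body is
        self.walk_speed_fps = walk_speed_fps

    def relate(self, label, obj_position, obj_width_ft):
        dx = obj_position[0] - self.position[0]
        dy = obj_position[1] - self.position[1]
        dist = math.hypot(dx, dy)
        return {
            "what": label,
            "distance_from_me_ft": round(dist, 1),
            "i_can_fit": obj_width_ft > self.width_ft,
            "seconds_to_reach": round(dist / self.walk_speed_fps, 1),
        }

me = RelationalPerceiver(position=(0, 0), width_ft=2.0, walk_speed_fps=3.3)
thought = me.relate("doorway", obj_position=(0, 10), obj_width_ft=3.0)
# Every field answers a question about the doorway *in terms of the self*:
# how far from ME, can I fit, how long would it take ME to get there.
```

The key point is that no raw coordinate is stored without first being translated into a fact about the robot itself.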


In short, this is a conscious robot. And just like a conscious human, this robot is quite interested in its surroundings. If its auditory sensors detect a voice, it thinks to itself:


“Where did that voice come from, relative to me?”
“What did it say, and was it talking to me?”
“Should I react to the voice with an action or a sound, or should I ignore it?”


When a lesser robot executing a program to walk from point A to B crashes its foot into a potted plant, it might coldly register that collision as a pressure at the location of the sensor on its foot, back up until the pressure is relieved, turn to the side, proceed on its way, and repeat as necessary. But when an MCon robot stubs its toe, that pressure signal is related to the internal homunculus, such that the robot interprets the pressure as something felt by “its toe”. When the MCon robot looks down to investigate the collision, it recognizes its own foot and differentiates it from the external object. It will then back up and walk with adequate clearance between its foot and the obstacle. This is a subtle but profound difference in sophistication: rather than simply calculating a distance to an object as a piece of data, the MCon robot instead says “I am x distance away from an object, and I know that object is not a part of me. I will move towards, move to avoid, or simply ignore that object.” This self-realization is achieved by virtue of the Relational Robotic Controller, and makes for a unique synthetic being.
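The contrast between the two reactions can be caricatured in code. This is purely illustrative (the sensor names, body map, and clearance value are made up, and this is not MCon’s implementation): the “lesser” handler treats the signal as a bare number, while the relational handler first asks which part of the self the signal belongs to.

```python
TOE_SENSOR = "pressure_42"
BODY_IMAGE = {"pressure_42": "left toe"}   # sensor id -> body part (the homunculus)

def lesser_reaction(sensor_id, pressure):
    # Raw, self-less handling: just back off until the number drops.
    return f"pressure {pressure} at {sensor_id}: reverse until pressure == 0"

def relational_reaction(sensor_id, pressure, clearance_ft=0.5):
    part = BODY_IMAGE.get(sensor_id)
    if part is None:
        return "signal is not from my body; ignore"
    # The signal is interpreted as happening to the robot's self,
    # so the response is a plan about *its own foot*, not about a sensor.
    return (f"my {part} hit something external; "
            f"back up and replan with {clearance_ft} ft clearance")

print(lesser_reaction(TOE_SENSOR, 3.1))
print(relational_reaction(TOE_SENSOR, 3.1))
```

Same input, but only the relational version produces a first-person interpretation that can feed further reasoning.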


In theory, because it has a ‘self’, an MCon robot can ultimately be trained to walk, speak and understand language in a more human-like manner than any other robot in existence. It can become coordinated enough to grasp, throw and catch through a repetitive self-programming process. It can learn to recognize you, and to differentiate one person from another by observing their attributes and relating them to itself. It can have a relationship with you. In short, it is C3PO realized. Or might it instead be the Terminator realized?


Dr. Rosen believes that his theoretical robot comes the closest to being the threat that Musk, Hawking and the others fear, for it truly will have the capacity for volition. But this admission is precisely why Dr. Rosen is best positioned to prevent that threat from being realized. As the good doctor explains, MCon’s robot will indeed have free will, but that free will can be limited to a finite number of programmable paths from which it can freely choose. This is analogous to a mouse running through a maze: while the mouse indeed has volition, the structure of the maze limits its choices to left, right, straight, backwards or stopping. Similarly, Dr. Rosen assures us that limiting the actions of a volitional robot can be accomplished with an array of fail-safes that confine it to the strict bounds that we, the good humans, determine. We hope he is correct.
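The maze-style fail-safe can be rendered as a toy guard. Everything here is an illustrative assumption (the action names, the random chooser, the decorator pattern), not a description of MCon’s actual safeguards: the robot’s decision function is free to want anything, but only human-whitelisted actions ever reach the motors.

```python
import random

ALLOWED_ACTIONS = {"left", "right", "straight", "backward", "stop"}

def failsafe(choose):
    """Wrap any decision function so it can only ever emit allowed actions."""
    def guarded(*args, **kwargs):
        action = choose(*args, **kwargs)
        if action not in ALLOWED_ACTIONS:
            return "stop"   # unknown intent defaults to the safest action
        return action
    return guarded

@failsafe
def free_will(sensed_world):
    # Stand-in for the robot's own volitional choice: it could want anything.
    return random.choice(["left", "straight", "launch_missiles"])

# However often the chooser wants "launch_missiles", the maze walls hold.
assert all(free_will({}) in ALLOWED_ACTIONS for _ in range(100))
```

Like the maze, the guard does not remove volition; it only bounds the set of outcomes volition can produce.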


Want to see an MCon robot in action? Unfortunately (or fortunately, as the case may be), you are going to have to wait a while. MCon is currently seeking partners to fund its next level of R&D, to help forge strategic alliances with complementary robotics labs, and to transform its patents, code, circuits and neural network designs into an operational robot. Dr. Rosen estimates that with minimal funding, MCon can have a Phase 1 prototype in as little as five years. The possibility alone is pretty enticing.

Building the world’s first sentient robot is obviously a colossal undertaking, but remember that this guy has launched multiple satellites into space, so he knows a thing or two about large-scale engineering projects. It is Dr. Rosen’s belief that his innovative approach to robotic self-awareness will serve as the foundation for the next century of humanoid robotics. Even if there is only a 1% chance that he is correct, we are hopeful that there are more than a few people in the sandbox who may be interested in exploring that possibility with him. ~



For more info contact:
