FAQ: Frequently Asked Questions


Can a machine really have conscious feelings?

As Christof Koch, a well-known neuroscientist specializing in consciousness research, stated in a report on the “Can a Machine be Conscious?” workshop:

“We know of no fundamental law or principle operating in this universe that forbids the existence of subjective feelings in artifacts designed or evolved by humans” - Christof Koch (2001)

Igor Aleksander proposes a theory of what would have to be synthesized for consciousness to be found in an engineered artifact1. Junichi Takeno reports having successfully created a rudimentary self-conscious robot2.

While consciousness research has recently come back into vogue, it has a long tradition: in 1827 the philosopher Giacomo Leopardi, based on his understanding of the scientific thought of his day, claimed “That matter thinks is a fact”3.

Isn’t this too ambitious?

There are two ways in which creating a truly intelligent, self-conscious machine can be considered ambitious: the number of person-hours required for research, development, construction and testing; and the level of scientific understanding and engineering knowledge required.

In recent history we’ve witnessed many demonstrations of community-driven and crowdsourced efforts that have yielded artifacts well out of reach of the vast majority of individual commercial or academic institutions. To take an extreme example, consider that a study of a common community-developed Linux software distribution released in 2011 showed that it consisted of approximately 419 million source lines of code (SLOC), with an estimated development cost of US$19 billion4,5. While the scope of our project is significantly smaller, the example serves to demonstrate that there are few limits to what can be achieved through the cooperative, collective will of people.

What about the required scientific understanding? Certainly it has never been achieved before (though that is not an argument against it per se, just as NASA not having sent astronauts to Mars by 2015 doesn’t imply it lacked the knowledge to do so). If you ask a group of scientists when human-level artificial intelligence will be realized, you’ll likely hear responses ranging from 5-10 years from now to never. Various researchers have polled scientists and AI experts, with Müller and Bostrom summarizing some of the results: “[…] it is fair to say that the results reveal a view among experts that AI systems will probably (over 50%) reach overall human ability by 2040-50, and very likely (with 90% probability) by 2075.”6

This project is squarely in the optimistic camp. We think many of the estimates are based on linear extrapolations of the current rate of progress, while many would argue progress is in fact exponential7. The current rate of progress is also hindered by a lack of political will, funding and popular support.

While a complete scientific understanding of natural intelligence as implemented in brains is some way off, we argue that it isn’t a necessary requirement for the development of broadly human-level artificial intelligence (see the following question).

Science doesn’t fully understand the human brain; doesn’t that imply we can’t create an intelligence?

No, it doesn’t. Science certainly doesn’t have a full understanding of the human (or animal) brain. A full understanding would consist of multiple levels of description, with higher levels reducing to, but also partially emergent from, lower levels. Modern neuroscience does provide a detailed understanding at many levels, particularly at the level of individual neurons, their various types, and their electro-chemical and dynamic properties. There are also significant bodies of knowledge about the brain at the level of micro-circuits, of individual brain areas, and of overall function. However, a complete, detailed understanding of the brain at all levels, from protein chemistry through human behaviour, is some way off.

We have the advantage of being able to leverage all that is currently known about brains and intelligence through Neuroscience, Cognitive Science and Psychology in addition to all the advances in Computer Science and Machine Learning. This already covers, arguably, the majority of the requisite knowledge. Unlike Neuroscience, which ultimately aims for a complete and detailed understanding of the brain, understanding how to engineer an artificial intelligence is significantly less demanding. In addition, by not needing to understand brains specifically, no single correct theory is required. Instead, where multiple possible theories exist as to how particular aspects of intelligence in brains work, we’re free to engineer several alternatives and choose the best approach by testing performance against our own requirements for an intelligent machine.

While we don’t claim to have all the answers at the outset, we firmly believe that by starting with most of the pieces of the jigsaw in place and systematically filling in those that are missing, the last few difficult pieces will fall into place.

Why hasn’t anyone attempted it already?

There are multiple reasons. In short, while there are several efforts aimed at creating artificial brain simulations and at reproducing various aspects of intelligence, few aim explicitly at creating general, self-conscious, human-level artificial intelligence. The impediments are sometimes due to commercial pressures, funding difficulties and life-cycles, and other incentives that work against cooperation. Consequently, while there has been steady progress, the goal remains unachieved.

As Artificial General Intelligence (AGI) researcher Joel Pitt opines:8

Now is the time for AGI because: computers are far better now; our understanding of cognitive science and neuroscience is a lot better now; and our arsenal of computational learning algorithms is a lot better now.
Due to the short-term focus of the current business community, and an anti-AGI attitude on the part of most current government research funding sources, not much R&D work on AGI is currently getting done, in spite of the ripeness of the time for it.
The time is now, the opportunity is here, but due to historical and practical reasons, very few are making a serious effort to grasp hold of the opportunity.

For further detail see the Why? section.

  1. Artificial Neuroconsciousness: an Update, Igor Aleksander, proceedings of Natural to Artificial Neural Computation, International Workshop on Artificial Neural Networks (IWANN ’95), Malaga-Torremolinos, Spain, June 7-9, pp. 566-583, 1995 (DOI 10.1007/3-540-59497-3_224).

  2. Creation of a Conscious Robot: Mirror Image Cognition and Self-Awareness, Junichi Takeno, Pan Stanford Publishing, 2012 (ISBN 978-9814364492).

  3. Zibaldone, Giacomo Leopardi, pp Z4288-89, 1827 (English translation: Farrar, Straus and Giroux, 2013; ISBN 978-0374296827).

  4. Debian Wheezy: US$19 Billion. Your price… FREE!, James E. Bromberger, blog post, 2012 (http://blog.james.rcpt.to/2012/02/13/debian-wheezy-us19-billion-your-price-free/, accessed 2015-03-29).

  5. Macro-level software evolution: a case study of a large software compilation, Jesus M. Gonzalez-Barahona, Gregorio Robles, Martin Michlmayr, Juan José Amor and Daniel M. German; Journal of Empirical Software Engineering, 2008 (DOI 10.1007/s10664-008-9100-x).

  6. Future progress in artificial intelligence: A Survey of Expert Opinion, Müller, Vincent C. and Bostrom, Nick, in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library; Berlin: Springer), 2014.

  7. The Age of Spiritual Machines, Ray Kurzweil, Viking, 1999 (ISBN 978-0670882175).

  8. OpenCog Project FAQ, web page, http://opencog.org/faq/, posted July 25th, 2010 (accessed April 20th, 2015).