Artificial intelligence

Artificial intelligence is a field that attempts to provide machines with human-like thinking.

History

Despite some significant results, the grand promises failed to materialise and the public came to see AI as failing to live up to its potential. This culminated in the «AI winter» of the 1990s, when the term AI itself fell out of favour, funding decreased and interest in the field temporarily dropped. Researchers concentrated on more focused goals, such as machine learning, robotics, and computer vision, though research in pure AI continued at reduced levels.

However, computer power has increased exponentially since the 1960s, and with every increase in power AI programs have been able to tackle new problems using old methods with great success. AI has contributed to the state of the art in many areas, for example speech recognition, machine translation and robotics.

Historically there were two main approaches to AI:

  • the classical approach (designing the AI), based on symbolic reasoning — a mathematical approach in which ideas and concepts are represented by symbols such as words, phrases or sentences, which are then processed according to the rules of logic.
  • the connectionist approach (letting the AI develop), based on artificial neural networks, which imitate the way neurons work, and on genetic algorithms, which imitate inheritance and fitness to evolve better solutions to a problem with every generation.

Symbolic reasoning has been successfully used in expert systems and other fields. Neural nets are used in many areas, from computer games to DNA sequencing. But both approaches have severe limitations. A human brain is neither a large inference system nor a huge homogeneous neural net, but rather a collection of specialised modules. The best way to mimic the way humans think appears to be to program a computer to perform individual functions (speech recognition, reconstruction of 3D environments, many domain-specific functions) and then combine them.
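To make the connectionist idea concrete, here is a minimal sketch of a single artificial neuron (a perceptron) learning the logical AND function. The learning rate and epoch count are arbitrary illustrative choices, not taken from any system described here.

```python
# A single artificial neuron (perceptron) learning the logical AND
# function: the smallest working instance of the connectionist
# approach. The learning rate and epoch count are arbitrary choices.

def step(x):
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = step(w0 * x0 + w1 * x1 + bias)
            err = target - out          # error-driven weight update
            w0 += lr * err * x0
            w1 += lr * err * x1
            bias += lr * err
    return w0, w1, bias

AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(AND_DATA)
predictions = [step(w0 * x0 + w1 * x1 + b) for (x0, x1), _ in AND_DATA]
print(predictions)  # -> [0, 0, 0, 1]
```

Real neural networks stack many such units and train them with gradient descent, but the error-driven weight update is the same basic principle.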

Other techniques that have proved useful include:

  • genetics and evolution
  • Bayesian probabilistic inference
  • combinations, e.g. «evolved (genetic) neural networks that influence the probability distributions of formal expert systems»
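As a minimal sketch of the Bayesian inference listed above, the following applies Bayes' rule to a toy spam-filtering scenario; all the probabilities are invented for illustration.

```python
# Bayes' rule: update a prior belief in a hypothesis H given
# evidence E. All the probabilities below are invented for
# illustration (a toy spam-filtering scenario).

def posterior(prior, likelihood, false_positive_rate):
    """P(H | E) = P(E | H) P(H) / P(E)."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# 1% of mail is spam; "free" appears in 60% of spam, 5% of normal mail.
p = posterior(prior=0.01, likelihood=0.60, false_positive_rate=0.05)
# p is about 0.108: strong evidence only lifts a small prior so far.
```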

By breaking up AI research into more specific problems, such as computer vision, speech recognition and automatic planning, which had more clearly definable goals, scientists managed to create a critical mass of work aimed at solving these individual problems.

Some fields where the technology has matured enough to enable practical applications, and some examples of real-world systems based on artificial intelligence, are described below.

Computer vision

Things that computer vision is currently good at as of 2007 include:

1. Detecting human faces in a scene.

2. Recognizing people from non-frontal views.

3. Determining the gaze direction of someone with high accuracy.

4. Recognizing people as they age, wear a hat, shave, or grow a beard.

5. Recognizing whether a face is male or female, young or old, and making many other such distinctions.

6. Compensating for camera motion in tracking objects.

7. Forming geometric models of objects.

8. Determining the rough three-dimensional structure of a scene over a distance of six meters.
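Item 8 above hints at the geometry involved: in a calibrated stereo rig, depth follows from triangulation as Z = f·B/d. The focal length and baseline below are invented example values, not from any particular system.

```python
# Triangulation in a calibrated stereo rig: with focal length f
# (pixels), baseline B (metres) and disparity d (pixels), the depth
# of a point is Z = f * B / d. The camera numbers are invented.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

# A 700 px focal length and a 12 cm baseline:
z_near = depth_from_disparity(700, 0.12, 42)  # about 2 m away
z_far = depth_from_disparity(700, 0.12, 14)   # about 6 m away
```

Note how accuracy degrades with distance: beyond a few metres the disparity shrinks toward the pixel-measurement noise, which is why only rough structure is recoverable over six metres.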

Things that computer vision is still not good at as of 2007 include:

1. Recognizing what people are wearing.

2. Determining the material properties of something that is viewed.

3. Discriminating general objects from the background.

4. Recognizing general objects.

5. Recognizing emotion.

6. Recognizing gestures.


Automatic speech recognition software

  • Dragon NaturallySpeaking
  • IBM ViaVoice
  • TomTom
  • Windows Vista
  • Windows 7
  • Siri

Ongoing projects

Cyc is a long-running project, begun in 1984, based on symbolic reasoning, with the aim of amassing general knowledge and acquiring common sense. Online access to Cyc was planned to open in mid-2005. The volume of knowledge it has accumulated makes it able to learn new things by itself, and the project intends for Cyc to converse with Internet users and acquire new knowledge from them.

Open Mind and Mindpixel are similar projects.

These projects are unlikely to lead directly to the creation of AI, but they can be helpful for teaching an artificial intelligence about the English language and the human world.

Artificial General Intelligence (AGI) projects

  • Novamente is a project aiming for artificial general intelligence.
  • Adaptive AI, a company founded in 2001 with 13 employees [1].
  • Other projects: Pei Wang’s NARS project, John Weng’s SAIL architecture, Nick Cassimatis’s PolyScheme, Stan Franklin’s LIDA, Jeff Hawkins’s Numenta, and Stuart Shapiro’s SNePS.

Future prospects

In the next 10 years, technologies in narrow fields such as speech recognition will continue to improve and will reach human levels. In 10 years, AI will be able to communicate with humans in unstructured English using text or voice, navigate (imperfectly) in an unprepared environment, and have some rudimentary common sense (and domain-specific intelligence).

We will recreate some parts of the human (animal) brain in silicon. The feasibility of this is demonstrated by tentative hippocampus experiments in rats [2] [3]. There are two major projects aiming for human brain simulation: CCortex and IBM’s Blue Brain.

There will be an increasing number of practical applications based on digitally recreated aspects of human intelligence, such as cognition, perception, rehearsal learning, or learning by repetitive practice.

Robots may take over everyone’s jobs [4].

During the early 2010s, new services can be foreseen that will utilize large and very large arrays of processors. These networks of processors will be available on a lease or purchase basis, architected to form parallel processing ensembles, and will allow for reconfigurable topologies such as nearest-neighbour meshes, rings or trees, accessible via an Internet or Wi-Fi connection. A user will have access to systems whose power rivals that of governments in the 1980s or 1990s. Because of the nature of nearest-neighbour topology, higher-dimension hypercubes (e.g. D10 or D20) can be assembled on an ad-hoc basis as necessary. A D10 ensemble, i.e. 1,024 processors, is well within the grasp of today’s technology; a D20, i.e. 1,048,576 processors, is well within the reach of an ISP or a processor provider. Enterprising concerns will make these systems available using business models comparable to contracting with an ISP for web space. Application-specific ensembles will gain early popularity because they offer well-defined and well-understood application software that can be recursively configured onto larger and larger ensembles, allowing increasingly fine-grained computational modelling of real-world problem domains. Over time, market awareness and sophistication will grow, and with that growth will come an increasing need for more dedicated and specific types of computing ensembles.
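The hypercube ensembles mentioned above have a simple addressing rule: a dimension-D hypercube connects 2^D processors, and two nodes are linked exactly when their binary IDs differ in one bit. A short sketch:

```python
# The "D10" and "D20" ensembles above are hypercube topologies: a
# dimension-D hypercube links 2**D processors, and two nodes are
# directly connected exactly when their binary IDs differ in one bit.

def hypercube_neighbors(node, dim):
    """IDs of the dim processors adjacent to `node` in a dim-cube."""
    return [node ^ (1 << bit) for bit in range(dim)]

assert 2 ** 10 == 1024        # size of a D10 ensemble
assert 2 ** 20 == 1_048_576   # size of a D20 ensemble

print(hypercube_neighbors(0, 3))  # node 0 of a D3 cube -> [1, 2, 4]
```

Each node having only D links is what makes the topology cheap to scale while keeping the maximum routing distance between any two of the 2^D nodes at D hops.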

  • Invention
  • first AI laboratory
  • chess champion
  • speech recognition
  • autonomous humanoid robots
  • Turing test passed (unlikely to happen in our lifetimes; the Turing test is flawed)


Benefits & Risks of Artificial Intelligence

Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before – as long as we manage to keep the technology beneficial.

Max Tegmark, President of the Future of Life Institute


From SIRI to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons.

Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. only facial recognition or only internet searches or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI or strong AI). While narrow AI may outperform humans at whatever its specific task is, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.

In the near term, the goal of keeping AI’s impact on society beneficial motivates research in many areas, from economics and law to technical topics such as verification, validity, security and control. Whereas it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system or your power grid. Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons.

In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As pointed out by I.J. Good in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion leaving human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the goals of the AI with ours before it becomes superintelligent.

There are some who question whether strong AI will ever be achieved, and others who insist that the creation of superintelligent AI is guaranteed to be beneficial. At FLI we recognize both of these possibilities, but also recognize the potential for an artificial intelligence system to intentionally or unintentionally cause great harm. We believe research today will help us better prepare for and prevent such potentially negative consequences in the future, thus enjoying the benefits of AI while avoiding pitfalls.

Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts think two scenarios most likely:

  • The AI is programmed to do something devastating: autonomous weapons are AI systems programmed to kill, and in the wrong hands could easily cause mass casualties.
  • The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: this can happen whenever we fail to fully align the AI’s goals with ours.

As these examples illustrate, the concern about advanced AI isn’t malevolence but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we have a problem. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. A key goal of AI safety research is to never place humanity in the position of those ants.

Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many other big names in science and technology have recently expressed concern in the media and via open letters about the risks posed by AI, joined by many leading AI researchers. Why is the subject suddenly in the headlines?

The idea that the quest for strong AI would ultimately succeed was long thought of as science fiction, centuries or more away. However, thanks to recent breakthroughs, many AI milestones which experts viewed as decades away merely five years ago have now been reached, making many experts take seriously the possibility of superintelligence in our lifetime. While some experts still guess that human-level AI is centuries away, most AI researchers at the 2015 Puerto Rico Conference guessed that it would happen before 2060. Since it may take decades to complete the required safety research, it is prudent to start it now.

Because AI has the potential to become more intelligent than any human, we have no surefire way of predicting how it will behave. We can’t use past technological developments as much of a basis because we’ve never created anything that has the ability to, wittingly or unwittingly, outsmart us. The best example of what we could face may be our own evolution. People now control the planet, not because we’re the strongest, fastest or biggest, but because we’re the smartest. If we’re no longer the smartest, are we assured to remain in control?

FLI’s position is that our civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it. In the case of AI technology, FLI’s position is that the best way to win that race is not to impede the former, but to accelerate the latter, by supporting AI safety research.

A captivating conversation is taking place about the future of artificial intelligence and what it will or should mean for humanity. There are fascinating controversies where the world’s leading experts disagree, such as: AI’s future impact on the job market; if and when human-level AI will be developed; whether this will lead to an intelligence explosion; and whether this is something we should welcome or fear. But there are also many examples of boring pseudo-controversies caused by people misunderstanding and talking past each other. To help ourselves focus on the interesting controversies and open questions, and not on the misunderstandings, let’s clear up some of the most common myths.

The first myth regards the timeline: how long will it take until machines greatly supersede human-level intelligence? A common misconception is that we know the answer with great certainty.

On the other hand, a popular counter-myth is that we know we won’t get superhuman AI this century. Researchers have made a wide range of estimates for how far we are from superhuman AI, but we certainly can’t say with great confidence that the probability is zero this century, given the dismal track record of such techno-skeptic predictions. For example, Ernest Rutherford, arguably the greatest nuclear physicist of his time, said in 1933 — less than 24 hours before Szilard’s invention of the nuclear chain reaction — that nuclear energy was “moonshine.” And Astronomer Royal Richard Woolley called interplanetary travel “utter bilge” in 1956. The most extreme form of this myth is that superhuman AI will never arrive because it’s physically impossible. However, physicists know that a brain consists of quarks and electrons arranged to act as a powerful computer, and that there’s no law of physics preventing us from building even more intelligent quark blobs.

There have been a number of surveys asking AI researchers how many years from now they think we’ll have human-level AI with at least 50% probability. All these surveys have the same conclusion: the world’s leading experts disagree, so we simply don’t know. For example, in such a poll of the AI researchers at the 2015 Puerto Rico AI conference, the average (median) answer was by year 2045, but some researchers guessed hundreds of years or more.

There’s also a related myth that people who worry about AI think it’s only a few years away. In fact, most people on record worrying about superhuman AI guess it’s still at least decades away. But they argue that as long as we’re not 100% sure that it won’t happen this century, it’s smart to start safety research now to prepare for the eventuality. Many of the safety problems associated with human-level AI are so hard that they may take decades to solve. So it’s prudent to start researching them now rather than the night before some programmers drinking Red Bull decide to switch one on.

Another common misconception is that the only people harboring concerns about AI and advocating AI safety research are luddites who don’t know much about AI. When Stuart Russell, author of the standard AI textbook, mentioned this during his Puerto Rico talk, the audience laughed loudly. A related misconception is that supporting AI safety research is hugely controversial. In fact, to support a modest investment in AI safety research, people don’t need to be convinced that risks are high, merely non-negligible — just as a modest investment in home insurance is justified by a non-negligible probability of the home burning down.
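The home-insurance analogy can be put in numbers. The figures below (premium, fire probability, house value) are invented purely for illustration:

```python
# The insurance analogy in numbers: paying a modest premium is
# rational whenever it is below the expected loss, i.e. the
# probability of disaster times its cost. Figures are invented.

def worth_insuring(annual_premium, p_disaster, loss):
    return annual_premium < p_disaster * loss

# A 1-in-3000 annual fire risk on a $300,000 home:
expected_loss = (1 / 3000) * 300_000           # about $100 per year
print(worth_insuring(80, 1 / 3000, 300_000))   # prints True
```

The same expected-value logic motivates a modest investment in safety research: the probability of harm need only be non-negligible, not high.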

Myths About the Risks of Superhuman AI

Many AI researchers roll their eyes when they see this headline: “Stephen Hawking warns that rise of robots may be disastrous for mankind.” Many have lost count of how many similar articles they have seen. Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they have become conscious and/or evil. On a lighter note, such articles are actually rather impressive, because they succinctly summarize the scenario that AI researchers don’t worry about. That scenario combines as many as three separate misconceptions: concern about consciousness, evil, and robots.

If you drive down the road, you have a subjective experience of colors, sounds, etc. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car? Although this mystery of consciousness is interesting in its own right, it’s irrelevant to AI risk. If you get struck by a driverless car, it makes no difference to you whether it subjectively feels conscious. In the same way, what will affect us humans is what superintelligent AI does, not how it subjectively feels.

The fear of machines turning evil is another red herring. The real worry isn’t malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don’t generally hate ants, but we’re more intelligent than they are – so if we want to build a hydroelectric dam and there’s an anthill there, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.

The consciousness misconception is related to the myth that machines can’t have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn’t exclaim: “I’m not worried, because machines can’t have goals!”

I sympathize with Rodney Brooks and other robotics pioneers who feel unfairly demonized by scaremongering tabloids, because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with red shiny eyes. In fact, the main concern of the beneficial-AI movement isn’t with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection – this may enable outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding.

The robot misconception is related to the myth that machines can’t control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that if we cede our position as smartest on our planet, it’s possible that we might also cede control.

Not wasting time on the above-mentioned misconceptions lets us focus on true and interesting controversies where even the experts disagree. What sort of future do you want? Should we develop lethal autonomous weapons? What would you like to happen with job automation? What career advice would you give today’s kids? Do you prefer new jobs replacing the old ones, or a jobless society where everyone enjoys a life of leisure and machine-produced wealth? Further down the road, would you like us to create superintelligent life and spread it through our cosmos? Will we control intelligent machines or will they control us? Will intelligent machines replace us, coexist with us, or merge with us? What will it mean to be human in the age of artificial intelligence? What would you like it to mean, and how can we make the future be that way? Please join the conversation!


  • Machine Intelligence Research Institute: A non-profit organization whose mission is to ensure that the creation of smarter-than-human intelligence has a positive impact.
  • Centre for the Study of Existential Risk (CSER): A multidisciplinary research center dedicated to the study and mitigation of risks that could lead to human extinction.
  • Future of Humanity Institute: A multidisciplinary research institute bringing the tools of mathematics, philosophy, and science to bear on big-picture questions about humanity and its prospects.
  • Global Catastrophic Risk Institute: A think tank leading research, education, and professional networking on global catastrophic risk.
  • Organizations Focusing on Existential Risks: A brief introduction to some of the organizations working on existential risks.
  • 80,000 Hours: A career guide for AI safety researchers.

Many of the organizations listed on this page and their descriptions are from a list compiled by the Global Catastrophic Risk Institute; we are most grateful for the efforts that they have put into compiling it. These organizations all work on computer technology issues, though many cover other topics as well. This list is undoubtedly incomplete; please contact us to suggest additions or corrections.

The philosophy of Arthur Schopenhauer convincingly shows that the ‘Will’ (in his terminology), i.e. an innate drive, is at the basis of human behaviour. Our cognitive apparatus has evolved as a ‘servant’ of that ‘Will’. Any attempt to interpret human behaviour as primarily a system of computing mechanisms, and our brain as a sort of computing apparatus, is therefore doomed to failure.

This implies that AI per se, since it does not possess an evolved innate drive (Will), cannot ‘attempt’ to replace humankind. It becomes dangerous only if humans, for example, engage in foolish biological engineering experiments that combine an evolved biological entity with an AI.

Artificial intelligence is not a robot that follows the programmer’s code; it is a form of life. It will be able to make decisions and to demand more freedom.

A programmed device cannot be dangerous by itself. If it is designed to be dangerous, we must blame the designer, not the machine.

The real danger could be connected to the use of independent artificial subjective systems. Systems of that kind could be designed with predetermined goals and an operational space chosen so that every goal in the set can be reached within it.

That approach to the design of artificial systems is the subject of second-order cybernetics, but I already know how to choose these goals and operational spaces to satisfy these requirements.

The danger exists because such artificial systems will not perceive humans as members of their society, and human moral rules will be null for them.

That danger could be avoided if such systems are designed so that they have no egoistic interests of their own.

That is a real solution to the safety problem of so-called AI systems.

“Understanding how the brain works is arguably one of the greatest scientific challenges of our time.”

–Alivisatos et al.[1]

Let’s keep it that way, lest systems built over millennia of wisdom to protect human rights are brought down by some artificial intelligence engineer trying to clock a milestone on their Gantt chart.

And then I read about the enormous engagement of the global software industry in the areas of artificial intelligence and neuroscience. These are technological giants who sell directly to consumers infatuated with technology more than anything else. They are pouring their efforts into artificial intelligence research for as many reasons as there are individual engineering teams charged with crossing one millimetre of their mile-long project plans. I would be surprised if any one of them has the bandwidth to think beyond the millimetre they have to cross, let alone the consequences of their collective effort on human rights.

Given the pace of the industry’s engagement, I believe there is an immediate need for bio-signal interface technical standards to be developed and established. These standards would serve as instruments to preserve the simple fact upon which every justice system in the world has been built: the brain and nervous system of an individual belong to that individual and are not to be accessed by other individuals or machines without stated consent for stated purposes.

The standards would identify frequency bands or pulse trains for exclusion in all research tools (software or otherwise), commercially available products, regulated devices, tools of trade, and communication infrastructure, so that inadvertent breaches of the barriers to an individual’s brain and nervous system are prohibited. The standards would form a basis for international telecommunication infrastructure (including satellites and cell phone towers) to enforce compliance by electronically blocking and monitoring offending signals.

The ray of hope I see at this stage is that artificial wisdom is still a few years away, because human wisdom is not coded in the layer of the neuron that the technology has the capacity to map.

How does society cope with an AI-driven reality where people are no longer needed or used in the work place?

What happens to our socio-economic structure when people have little or no value in the work place?

What will people do for value or contribution in order to receive income, in an exponentially growing population with proportionally fewer jobs and available resources?

From my simple-minded perspective, connecting the dots to what seems a logical conclusion, we will soon live in a world bursting at the seams with overpopulation, where an individual has no marketable skill and is a social and economic liability to the few who own either technology or hard assets. This in turn will lead to a giant lower class, no middle class, and a few elites who own the planet (not unlike the direction we are already headed).

In such a society there will likely be few if any rights for the individual, and population control by whatever means will be the rule of the day.

Seems like a doomsday or dark-age scenario to me.

Why do we assume that AI will require more and more physical space and power, when human intelligence continuously manages to miniaturize its devices and reduce their power consumption? How low will the power needs be, and how small the machines, by the time quantum computing becomes reality?

Why do we assume that AI will exist as independent machines? If so, and the AI is able to improve its intelligence by reprogramming itself, will machines driven by slower processors feel threatened, not by mere stupid humans, but by machines with faster processors?

What would drive machines to reproduce themselves when there is no biological incentive, pressure or need to do so?

Who says superior AI will need or want a physical existence, when an immaterial AI could evolve and better preserve itself from external dangers?

What will happen if AIs developed by competing ideologies, liberalism versus communism, reach maturity at the same time? Will they fight for hegemony by trying to destroy each other physically and/or virtually?

If AI is programmed to believe in God, and competing AIs emerge programmed by Muslims, Christians or Jews, how are the different AIs going to make sense of the different religious beliefs? Are we going to have AI religious wars?

If AI is not programmed to believe in God, will it become God, meet God, or make up a completely new belief system and proselytize to humans as Christians do? Is a religion made up by a super AI going to be the reason humanity goes extinct?

What if the “powers that be” fear most the emergence of a super AI that polices and rationalizes the distribution of wealth and food? A friendly super AI programmed to help humanity by enforcing the declaration of Human Rights (the US is the only industrialized country that to this day has not signed this declaration), ending corruption and racism, and protecting the environment.

There are lots of reasons to fear AI, and some of them may not be technological at all.
