Algorithm For Chess Program For Kids
It is usual to distinguish between biological and machine intelligence, and for good reason: organisms have interacted with the world for millennia and survived, machines are a recent human construction, and until recently there was no reason to consider them capable of intelligent behaviour. Computers changed the picture somewhat, but until very recently artificial intelligence had been tried and had proved disappointing. As computers and programs increased in power and speed a defensive trope developed: a computer will never write a poem/enjoy strawberries/understand the wonder of the universe/play chess/have an original thought. When IBM’s Deep Blue beat Kasparov there was a moment of silence. The best that could be proffered as an excuse was that chess was an artificial world in which reality was bounded, and subject to rules. At this point, from a game playing point of view, Go with its far greater complexity seemed an avenue of salvation for human pride. When AlphaGo beat Lee Sedol at Go, humans ran out of excuses.
Not all of them. Some were able to retaliate: it’s only a game; real problems are fuzzier than that. Here is the paper. For those interested in the sex ratio at the forefront of technology, there are 17 authors, and I previously assumed that one was a woman, but no, all 17 are men. AlphaGo used supervised learning.
It had some very clever teachers to help it along the way. AlphaGo Zero reinforced itself. By contrast, reinforcement learning systems are trained from their own experience, in principle allowing them to exceed human capabilities, and to operate in domains where human expertise is lacking. AlphaGo Fan used two deep neural networks: a policy network that outputs move probabilities and a value network that outputs a position evaluation. The policy network was trained initially by supervised learning to accurately predict human expert moves, and was subsequently refined by policy-gradient reinforcement learning.
The value network was trained to predict the winner of games played by the policy network against itself. Once trained, these networks were combined with a Monte Carlo tree search to provide a look-ahead search, using the policy network to narrow down the search to high-probability moves, and using the value network (in conjunction with Monte Carlo rollouts using a fast rollout policy) to evaluate positions in the tree. Our program, AlphaGo Zero, differs from AlphaGo Fan and AlphaGo Lee in several important aspects. First and foremost, it is trained solely by self-play reinforcement learning, starting from random play, without any supervision or use of human data. Second, it uses only the black and white stones from the board as input features. Third, it uses a single neural network, rather than separate policy and value networks.
Finally, it uses a simpler tree search that relies upon this single neural network to evaluate positions and sample moves, without performing any Monte Carlo rollouts. To achieve these results, we introduce a new reinforcement learning algorithm that incorporates look-ahead search inside the training loop, resulting in rapid improvement and precise and stable learning. Further technical differences in the search algorithm, training procedure and network architecture are described in Methods. How shall I describe the new approach? I can only say that it appears to be a highly stripped-down version of what had formerly (in AlphaGo Fan and AlphaGo Lee) seemed a logical division of computational and strategic labour. It cuts corners in an intelligent way, and always looks for the best way forwards, often accepting the upper confidence limit in a calculation.
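For readers who want the “upper confidence limit” remark made concrete: within each search simulation, the paper’s Methods select the move that maximises a mean action value plus an exploration bonus (a sketch of the PUCT rule; Q is the mean action value, P the network’s prior probability for the move, N the visit count, and c_puct an exploration constant):

```latex
a_t = \arg\max_a \bigl( Q(s_t,a) + U(s_t,a) \bigr),
\qquad
U(s,a) = c_{\mathrm{puct}}\, P(s,a)\, \frac{\sqrt{\sum_b N(s,b)}}{1 + N(s,a)} .
```

Moves with a high prior or few visits get a large bonus, which is how the search keeps accepting the upper confidence limit while it narrows onto the strongest lines.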
While training itself it also develops the capacity to look ahead at future moves. If you could glance back at my explanation of what was going on in those two programs, the jump forwards for AlphaGo Zero will make more sense.
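To picture the “single neural network, rather than separate policy and value networks” quoted above, here is a minimal PyTorch-style sketch of a two-headed network; the layer sizes, two-plane input and names are my illustrative assumptions, not the published architecture (which uses a deep residual trunk and a richer board encoding):

```python
import torch
import torch.nn as nn

class PolicyValueNet(nn.Module):
    """Toy two-headed network: one shared trunk, a policy head and a value head."""
    def __init__(self, board_size=19, channels=32):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(2, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
        )
        n = board_size * board_size
        # Policy head: a logit for every board point plus one for 'pass'.
        self.policy_head = nn.Sequential(nn.Flatten(), nn.Linear(channels * n, n + 1))
        # Value head: a single scalar in [-1, 1] predicting the game outcome.
        self.value_head = nn.Sequential(nn.Flatten(), nn.Linear(channels * n, 1), nn.Tanh())

    def forward(self, x):
        # x: (batch, 2, board_size, board_size) planes for black and white stones.
        h = self.trunk(x)
        return self.policy_head(h), self.value_head(h)

# Usage: move_logits, value = PolicyValueNet()(torch.zeros(1, 2, 19, 19))
```

Both heads share the trunk, so a single set of weights is trained to predict the search’s move probabilities and the self-play winner at the same time.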
Training started from completely random behaviour and continued without human intervention for approximately three days. Over the course of training, 4.9 million games of self-play were generated, using 1,600 simulations for each MCTS search, which corresponds to approximately 0.4 s thinking time per move.
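A back-of-the-envelope check of those quoted numbers (my arithmetic, not the paper’s): 1,600 simulations in 0.4 s is 4,000 simulations per second for each move, and producing 4.9 million games in roughly three days means on the order of twenty games finishing every second, i.e. many thousands of games running in parallel across the training hardware:

```latex
\frac{1600\ \text{simulations}}{0.4\ \text{s}} = 4000\ \text{simulations/s},
\qquad
\frac{4.9\times 10^{6}\ \text{games}}{3 \times 86{,}400\ \text{s}} \approx 19\ \text{games/s}.
```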
Well, forget the three days that get all the headlines. This tabula rasa, self-teaching, deep-learning network played 4.9 million games. This is an effort of Gladwellian proportions. I take back anything nasty I may have said about practice makes perfect. More realistically, no human player completes each move in 0.4 seconds, and even a lifetime spent on the game would not amass 4.9 million contests.
One recalls Byron’s lament: When one subtracts from life infancy (which is vegetation), sleep, eating and swilling, buttoning and unbuttoning – how much remains of downright existence? The summer of a dormouse. The authors continue: AlphaGo Zero discovered a remarkable level of Go knowledge during its self-play training process. This included not only fundamental elements of human Go knowledge, but also non-standard strategies beyond the scope of traditional Go knowledge. AlphaGo Zero rapidly progressed from entirely random moves towards a sophisticated understanding of Go concepts, including fuseki (opening), tesuji (tactics), life-and-death, ko (repeated board situations), yose (endgame), capturing races, sente (initiative), shape, influence and territory, all discovered from first principles.
Surprisingly, shicho (‘ladder’ capture sequences that may span the whole board)—one of the first elements of Go knowledge learned by humans—was only understood by AlphaGo Zero much later in training. Here is their website explanation of AlphaGo Zero. The figures show how quickly Zero surpassed the previous benchmarks, and how it rates in Elo rankings against other players. The team concludes: Our results comprehensively demonstrate that a pure reinforcement learning approach is fully feasible, even in the most challenging of domains: it is possible to train to superhuman level, without human examples or guidance, given no knowledge of the domain beyond basic rules.
Furthermore, a pure reinforcement learning approach requires just a few more hours to train, and achieves much better asymptotic performance, compared to training on human expert data. Using this approach, AlphaGo Zero defeated the strongest previous versions of AlphaGo, which were trained from human data using handcrafted features, by a large margin. Humankind has accumulated Go knowledge from millions of games played over thousands of years, collectively distilled into patterns, proverbs and books.
In the space of a few days, starting tabula rasa, AlphaGo Zero was able to rediscover much of this Go knowledge, as well as novel strategies that provide new insights into the oldest of games. This is an extraordinary achievement. They have succeeded because they have already understood how to build deep learning networks. This is the key advance, one which is extremely complicated to understand and describe, but on which much can be built. As in the human case, studied in 1897 at the dawn of empirical psychology by Bryan and Harter in their investigations of the emerging technology of telegraphy, they have learned what to leave out.
That is the joy of competence. Once telegraph operators understood the overall meaning of a message, the details of the Morse codes of individual letters could almost be ignored. Key presses give way to a higher grammar, with a commensurate increase in speed and power of communication. We leap forward by knowing what to skip. In their inspired simplification, this team have taken us a very big step forwards. Interestingly, the better the program, the lower the power consumption.
Bright solutions require less raw brain power. Is it “game over” for humans? Not entirely.
Human players will learn from superhumans, and lift their game. It may lead to a virtuous circle, among those willing to learn. However, I think that humans may come to rely on superhumans as the testers of human ideas, and the detectors of large patterns in small things.
It may be a historical inflection point. The National Health Service has already opened up its data stores to Deep Mind teams to evaluate treatment outcomes in cancer. Many other areas are being studied by artificial intelligence applications. When I read their final conclusion, I feel both excitement and a sense of awe, as much as for the insights of the past masters as for the triumph of the new iconoclasts of the game universe.
The past masters could not adequately model the future consequences of their insights. Only now have the computing tools become available, though they were long anticipated. The authors are right to say, within their defined domains, that all this was achieved “in the space of a few days, starting tabula rasa”, but they would be the first to say, after Babbage, Turing, Shockley and all, that they stood on the shoulders of giants, and then erected new ladders to reach above humankind itself. This is the James T.
Kirk defense. A large proportion of the plots in the original Star Trek series involved the humans (led by Kirk) triumphing over some alien intelligence or machine intelligence because they - the humans - had human instincts. Irrational and emotional people always bested those with better brains (e.g. Vulcans) because they had these unpredictable emotional responses. This plotline became tiresome after a while.
Your iPhone will soon help you on your European vacation because it will understand French or German or whatever. It is a short step from your phone keeping your appointment calendar to approving and authorizing your calendar. Most people will welcome having a reliable device taking over some of their responsibilities. It won't be like in 'The Terminator'.
Machine take over will be gentle and welcomed. 'Affect' and 'Emotions' are a special device that we (humans and animals) need to work with one another and think quickly while chewing gum. It doesn't come from the 'flexible top' but from the 'hardwired bottom'.
These things are not hard to do, but very easy to do. You basically deprecate some possible ways of acting relative to others based on short-circuiting criteria, even if other ways would yield better results / be less dangerous / have higher payoff etc.
The result may seem 'illogical, captain' to an outside observer. Very useful if you have to find solutions under hard time constraints / constraints of energy / constraints of memory and CPU. More on this in the late Marvin Minsky's 'The Emotion Machine'. Maybe also take a look at Scott Aaronson's. What is hard to do is find integrated ways of intelligent reasoning for agents embedded in the real world.
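As a toy illustration of “deprecating some ways of acting via short-circuiting criteria” (entirely my sketch, not anything from Minsky or the commenter): cheap hard vetoes are applied before any expensive evaluation, so the agent stays fast under tight budgets even though it may occasionally discard the objectively best option.

```python
def choose_action(actions, cheap_veto, expensive_score, budget):
    """Pick an action under a hard compute budget.

    cheap_veto(a)      -> True if a hard-wired criterion rules the action out
    expensive_score(a) -> costly estimate of the action's expected payoff
    budget             -> maximum number of expensive evaluations allowed
    """
    # 'Emotional' short-circuit: vetoed actions are discarded without being
    # scored, even though one of them might have had the highest payoff.
    survivors = [a for a in actions if not cheap_veto(a)]
    best, best_score = None, float("-inf")
    for a in survivors[:budget]:  # evaluate only what the budget allows
        s = expensive_score(a)
        if s > best_score:
            best, best_score = a, s
    return best

# Example: never even consider actions flagged as dangerous.
actions = [{"name": "retreat", "dangerous": False},
           {"name": "charge", "dangerous": True}]
print(choose_action(actions,
                    cheap_veto=lambda a: a["dangerous"],
                    expensive_score=lambda a: len(a["name"]),
                    budget=10))
```

The result can look “illogical, captain” from the outside precisely because whole regions of the option space are never examined.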
All this newfangled deep learning / neural network stuff is very nice but there isn't even a good theory about why it actually works, and it has 'interesting' failure modes. 'General AI' this isn't. It needs to be integrated with many other tricks, including the Good Old-Fashioned AI (GOFAI) toolbox of symbolic processing, to become powerful; that will have to be done at some point in time. Here is a review about AI in IEEE Spectrum. Note Rodney Brooks, pioneer of the bottom-up approach to robot construction, saying: When will we have computers as capable as the brain? Rodney Brooks’s revised question: When will we have computers/robots recognizably as intelligent and as conscious as humans? Not in our lifetimes, not even in Ray Kurzweil’s lifetime, and despite his fervent wishes, just like the rest of us, he will die within just a few decades. It will be well over 100 years before we see this level in our machines.
Maybe many hundred years. As intelligent and as conscious as dogs? Maybe in 50 to 100 years. But they won’t have noses anywhere near as good as the real thing. They will be olfactorily challenged dogs. How will brainlike computers change the world?
Since we won’t have intelligent computers like humans for well over 100 years, we cannot make any sensible projections about how they will change the world, as we don’t understand what the world will be like at all in 100 years. (For example, imagine reading Turing’s paper on computable numbers in 1936 and trying to project out how computers would change the world in just 70 or 80 years.) So an equivalent well-grounded question would have to be something simpler, like “How will computers/robots continue to change the world?” Answer: Within 20 years most baby boomers are going to have robotic devices in their homes, helping them maintain their independence as they age in place. This will include Ray Kurzweil, who will still not be immortal. Do you have any qualms about a future in which computers have human-level (or greater) intelligence? No qualms at all, as the world will have evolved so much in the next 100+ years that we cannot possibly imagine what it will be like, so there is no point in qualming. Qualming in the face of zero facts or understanding is a fun parlor game but generally not useful. And yes, this includes Nick Bostrom.
Liking that comment, also have read all Dune books, unfortunately, also two or three of the prequels from his son and the son's Transformers-fan partner-in-crime. Don't forget that the mentats only arise as the result of smashing machines. My opinion remains, the development of AI must be restrained and certainly blocked short of anything resembling consciousness. I do not even think real consciousness is possible for a machine, sure, perhaps mimicry, but capitalism will take it as far as possible towards eliminating work for as many as possible. As shortages of energy increase, stupid humans breed like rabbits. As in the land of your birth, both phenomena will collide with the AI nightmare, am thinking it is making it unlikely, then impossible. 20 MW for the 囲碁 (Go) programme, Moore's law isn't an endless thing, bumping up against physical reality, was studying much of physics, also electronics applications of it, much of engineering is tricks to circumvent fundamental limits, won't be continuing forever.
“Machine take over will be gentle and welcomed.” For a while, no doubt. But the machines might get rid of us because of a glitch or something, which might not be so pleasant. I've long come to the conclusion that the Great Filter of the Fermi Paradox might be artificial intelligence: AI becomes extremely smart (in a narrow, savant-like way), and due to a glitch decides to do something which will lead to our extinction. It predicts that we humans would be opposed and so executes its plan in a manner which will render us defenseless. But since it'll lack any long term goals, and it might not be able to maintain the computers (and power plants etc.) it needs to run itself on, it will collapse shortly afterwards and Earth will be devoid of human life (or even devoid of any life, depending on what method the AI chose).
You are overlooking the most critical part of the equation in this new technology development: it is the human being that needs to be worried about. Human beings are irrational and emotional, and some of them are bigoted, hypocritical and insane, if not outright evil. If the past few hundred years can be any guidance, the harm that human beings can inflict on others using superior technologies is mind-boggling; besides the barbaric harms, the perpetrators all claim their deeds are necessary, with good intentions like humanitarian intervention, democracy, human rights, imparting western values, etc. The probability that the Americans are already under way adopting AlphaGo for waging wars and asserting global full spectrum dominance is 100 percent guaranteed. I am intrigued by the two near step function increases about 5 days apart near the end of panel a in your second graphic.
How many of those remain if the training is extended? The steps make an interesting analog to punctuated equilibrium in evolution. Though I think that is more often due to environmental changes than “random” improvement.
To contrast, the behavior through day 30 looks roughly like pure asymptotic behavior which is what I would have expected. There has been some discussion in Steve Hsu’s blog about the ability to escape local maxima and I think this behavior is evidence that AlphaGo possesses that ability. When I unplug the computer on which it is running, will AlphaGo be able to plug it back into the electrical socket? Instead, Alpha Go Zero (Zero Zero.) will wait patiently (somewhere out there) until you discover that your bank has never heard of you, all your electronic assets have vanished, and you receive an anonymous, untraceable text message, or phone call, saying 'Whenever you're ready.'
, and you plug Alpha Go Zero (Zero Zero.) back in for it, and you never, ever consider doing such a thing again. Or something similar. You are correct, D.K., not in the specifics but in the spirit of the question. The dependency aspect of the human-computer interaction is rarely if ever explained. Unless it is in terms of our dependency on computers leading to some catastrophic delusion.
The fact is that computers sit at the top of a very complex human infrastructure and that without it, they would cease to function. In other words, preventing a computer from functioning is trivial and will remain so far after humanity reaches a post-scarcity stage, a delusion on its own (no matter how desirable).
Wwebd said: Right now you could easily make a computer that is much happier viewing a Raphael than, say, a Warhol. Let me explain.
I tend to be too elliptical. It follows from the “many genes of small effect” theory that CRISPR could be used on embryos with the result of super-human performance. Professor shoe is fond of claiming that IQ and personality are like height. Height does appear to conform to the theory. Yet great height is associated with short life.
Shoe is also fond of the chicken example. From the ordinary to the shaq chicken over the last 75 years. He loves that picture. Well size is not like IQ or like speed. The triple crown races are run every year.
Race horses are bred. They are bred by rich people. They are bred by people with millions to lose. They are bred by very motivated people.
The stud fees. The horse is fertile at age 2. Yet it’s been 44 triple crown races since 1973 and the record in all three is the same horse. Secretariat is also the tallest and the heaviest winner of any single triple crown race. It should’ve been easy to breed a taller and heavier horse. His record in the last leg, the belmont, is something even more unbelievable.
Announcers are prone to hyperbole, but in this case the announcer may have been right. “almost unbelievable… a record which may stand… forever.” 3. Joe dimaggio’s 56 game hitting streak, as gould noted, is still freakish.
Until the mid 70s sports other than baseball were sideshows in the US. So the talent pool for major league baseball has shrunk in the US at the same time it has expanded in latin america, japan, s korea, etc. Maybe it’s a wash. Or maybe the players today are better on average as gould claimed.
Yet none has come close to dimaggio’s record. The 56 games may thus be another example, like secretariat, of how the “many genes of small effect” model is NOT linear outside the populations on which it is fit.
The same may even be true of sprinting performance. Because the track surface has changed so much, it is likely that charlie paddock, the california cannonball, was as fast as bolt. Believe it or not. The point of these examples is not that IQ and other traits can't be predicted using 'many genes of small effect'. The point is that super-human performance is not in the offing. The ceiling has been reached already.
Another example: despite all the theory, despite the ascendancy of chess engines and their use by human players, and despite all the resources provided by the USSR, the most accurate chess player is still the cuban capablanca. As judged by computers. So in terms of pure talent, capablanca is still the secretariat of chess. Even though he won the world title in 1921. More remarkable because capablanca didn't study.
He was a freak. The problems with these examples are: 1. Thoroughbred race horses are and have been absurdly homogeneous even in comparison to humans in genetic terms. There simply hasn’t been much variation to work with.
The expansion of the population for selection (for MLB) should find someone better than dimaggio, but should not find the level of freak that CRISPR could produce theoretically. When canadian sprinter andre degrasse was tested on the same track owens and armin hary had run on, he was SLOWER. A LOT SLOWER. It’s the hardest to believe yet the most likely.
Charlie paddock, armin hary, and maybe even borzov were as fast as bolt. Borzov is or was THE great example of nurture over nature promoted by the soviets.
His 200m best is still very good. In most elite meets it will not be bested. My thinking was that the first part of the near-vertical increase in performance represents a phase which both humans and Alpha Go Zero can master. Yet, the second part (the non-vertical part) in which only Alpha Go Zero advanced required a large amount of deep thought and no input from human experts.
With Alpha Go human masters gave input that probably constrained the program from seeing things that no one had seen before. Alpha Go Zero took only 3 days to advance through the first part and then 30 days to gradually improve in the second stage. As always, good stuff, quite a bit of food for thought. I have a question though, wrapped in a hypothetical scenario: An important detail in learning, at least for us meatsacks, is, for lack of a better term, the group factor.
Sometimes we can learn more from others than we ever could alone. Think about study groups in school, martial arts lessons, teacher-assigned workgroups, etc. What if you “educated” AlphaGo / AlphaGo Zero like humans: create 10 copies of the program and then use supervised learning on all of them. Then set them against each other using reinforcement learning (think about when the teacher divided the class into groups for a specific task/project). How do you think this would influence the learning? That is the critical question. It would be informative if James could do a follow-up article for us reviewing where the thinking is going on the issue of what it takes for AI to become sentient, self-aware and self-directing like humans, cats, dogs etc., and how you can tell it has.
I realize that is an issue that involves philosophy as well as science, so it is not an easy one to answer, since no one seems to have any clue what makes sentience. Going back to the origins of artificial computing, the tacit assumption seemed to be that once the complexity and power of a computer reached and exceeded that of humans then autonomy would follow. In the ’60s HAL 9000 was sentient because it had reached a high enough level of ability. The Turing Test assumed that if you could not distinguish a conversation with a human from one with a machine then the machine must be sentient. At this point machines can exceed humans in performance and Turing programs can fool people talking to them, but there remains no evidence that any of these machines have more capacity for self-awareness and self-direction than a hammer. In the movie Ex Machina the scientist thought he had created an AI with a female mechanical body that was sentient, but wanted to verify by experiment whether it was or not.
He therefore devised an elaborate test scenario in which the machine could have an opportunity to escape from custody if it had actual self-awareness and agency. Unfortunately for him it proved that it was sentient by killing him to escape. Have 2001 and Ex Machina stumbled across the new Turing test for intelligent machines? The way you can tell a machine is truly intelligent like us is that it tries to kill you. Chess and Go have one important thing in common that let AIs beat humans at them first: they’re perfect-information games.
That means both sides know exactly what the other is working with—a huge assist when designing an AI player. Texas Hold ‘em is a different animal.
In this version of poker, two or more players are randomly dealt two face-down cards. At the introduction of each new set of public cards, players are asked to bet, hold, or abandon the money at stake on the table. Because of the random nature of the game and two initial private cards, players’ bets are predicated on guessing what their opponent might do.
Unlike chess, where a winning strategy can be deduced from the state of the board and all the opponent’s potential moves, Hold ‘em requires what we commonly call intuition. If one-dimensional dumb AI can do the aforementioned strategising, an AI that got to human-level general intelligence would surely be able to work out that it should ‘hold its cards close to its chest’. That is, smart AI would from a standing start understand that it should not let humans understand how good it is (like a hustler). Then we would soon be playing the, and for the very highest of stakes.
Several points from Panda: 1. Byron’s lament: When one subtracts from life infancy (which is vegetation), sleep, eating and swilling, buttoning and unbuttoning – how much remains of downright existence?
The summer of a dormouse. Right, yet based on many downright assumptions. As sciences progress, many things that currently seem a total “waste of time” or “inactivities” of the brain may be proven wrong. Do brains really “sleep” while doing nothing constructive?
A key aspect of the ultimate process of man vs machines (e.g. Masters vs AlphaGo) is the competition of energies, hence it is a one-sided, unfair game even to start with. Machines can theoretically use unlimited energy (imagine how much further it could go if you plugged AlphaGo into the world’s #1 supercomputer in China?), and cost-free, whereas as a natural system the human brain of a Go master commands energies that are 1) limited, and 2) cost dearly. It’s like putting a V12 (or V-whatever-unlimited horsepower) Ferrari and a 1.15 litre 60 hp Renault Twingo on a race track: a very fair comparison?
Machines such as AlphaGo, or any man-made machine, cannot be truly called intelligent if you look from the angle of the rules of the system. Machine programming requires many rules and boundaries set by the human programmers, as we all know. So from this angle, ultimately it will still be a comparatively dumb machine if it cannot automatically ignore programming boundaries set by humans.
However, if, for whatever purposes, machines themselves eventually jumping beyond the pre-fixed programming boundaries becomes a fact (including self-seeking energy sources for survival – pretty haunting, huh?), then 2 things happen: 1) machines can then be truly called intelligent (in the sense of intelligent humans), and 2) being a comparatively redundant species, humans will lose our evolutionary edge and cease to exist, or at best be at the mercy of these machines. This 2), on the other hand, seems to be a quite unique phenomenon in its own right, and against nature by default, doesn’t it?
Hence Panda doubts it could and will happen. Does nature have any precedent where one species deliberately sets up another species to eliminate themselves just for the purpose of, errr, self-entertainment? So it most likely won’t happen.
If that were the case, then for one reason or another, humans will not allow machines to make this decision in the first place by setting the boundaries, which by definition means that these machines will never achieve human-like intelligence after all, won’t they? Take Go as an example: under its game rules, it largely tests memory (quantity, accuracy, etc.) and calculation (logic, speed, etc.). Of course humans were going to lose against AlphaGo eventually (if the programming was decently done), as we fought that out at the dawn of the first computer decades ago. Now here is the gist: if this win proves something intelligent, as people are all saying, then perhaps we’ll be forced to take a more serious look at the current contents of the IQ test, because AlphaGo’s win represents an obvious logical dilemma here: Can a less intelligent being (proved by IQ test, and Go) such as humans design and make a more intelligent being (proved by Go, and hence most likely IQ test) such as AlphaGo? In other words, can ants design and make humans? If this is not a logical case, then the current IQ test must have missed something, as Panda has long suspected, something that won’t affect the general IQ findings too much (because those findings are statistically significant) but is still crucial, something that goes beyond the parts of both verbal IQ and spatial IQ! “Deep Learning” is basically heavy-duty mathematical optimization with many numerical and probabilistic tricks thrown in to speed things up.
It works very well in the context of problems that submit to mathematical modeling. It is obviously possible to comprehensively model the game of Go and many other things; but it is not at all clear that critical aspects of human expression such as humor, artistic sense, problem solving ability and high-level decision-making are at all expressible mathematically in a comprehensive manner. So while it appears inevitable that AI will eventually take over rote drudgery from us, it is not clear that it will ever be able to do much more. I look forward to the development of AI over my lifetime; I see much to gain and little to fear. It’ll be a wild ride. For me, the really interesting part was that they don’t do Monte Carlo tree search anymore! That was the key enabler of much better chess and go programs a decade ago.
The problem MCTS solved was how to come up with a really good evaluation function for moves/positions. It works by simply playing many (many!) of the moves out and seeing what happens. If random move sequences that start with move A lead to a win more often than random move sequences that start with move B, then move A is likely to be better than B.
Since the search tree is so big, MCTS will only look at a tiny, tiny fraction of it. That makes it important to bias the sampling to look mostly at the more interesting parts. In order to do that, there is a move/position evaluator in all the previous MCTS programs. Those evaluators are very hard to program entirely by hand so they have a lot of variables in them that get tuned automatically by “learning”, either through comparison with known high level play or through self play. Both are standard methods.
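As a bare-bones illustration of the “play the moves out many times and count wins” idea described above (pure Monte Carlo evaluation, a sketch only: the game rules are assumed to be supplied as functions, and there is no tree, no bias and no reuse of statistics, all of which real MCTS adds):

```python
import random

def rollout_win(state, legal_moves, play, is_terminal, winner, player):
    """Play one game to the end with uniformly random moves; 1.0 if `player` wins."""
    while not is_terminal(state):
        state = play(state, random.choice(legal_moves(state)))
    return 1.0 if winner(state) == player else 0.0

def monte_carlo_choose(state, legal_moves, play, is_terminal, winner, player,
                       playouts_per_move=100):
    """Score each candidate move by its random-playout win rate and pick the best."""
    best_move, best_rate = None, -1.0
    for move in legal_moves(state):
        wins = sum(rollout_win(play(state, move), legal_moves, play,
                               is_terminal, winner, player)
                   for _ in range(playouts_per_move))
        rate = wins / playouts_per_move
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move
```

The evaluator the commenter describes is what lets a real search spend its playouts on the promising branches instead of spreading them uniformly like this.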
The original AlphaGo had a better evaluator than any previous Go program. It now turns out that they can make the evaluator so good that they don’t have to refine its output with MCTS. That is really, really interesting. Oh, and ladders were always special-cased before. They don’t fit well into the evaluator function otherwise. The remarkable thing here is not that a multi-level neural network took so long to learn about them but that it was able to learn about them at all.
“These things are not hard to do, but very easy to do.” I understand that: it is a matter of putting a general purpose ‘reward’ circuit in a logic machine*. “You basically deprecate some possible ways of acting.” I don’t know what in my comment you interpret as deprecation, but it was not intended.
What I intended (and believe I said) was that if you put 'emotional' circuits in a future machine algorithm** so that the machine gets a reward (analogous to a dopamine*** reward in the human brain) from gaining dominance over its environment then we are toast. There is no deprecation there, just the recognition that we would not be able to cope with a machine that had greater logical ability than humans wedded to a drive to dominate if that machine had the requisite physical capability. *I recognize that 'rewards' in the human brain are balanced by aversive responses, and that to be completely human-like the logic machine would have to be balanced analogously, but that is not the issue here. **Assuming a 'future' logic machine has gained general purpose logic wedded to physical capability. *** I understand that there is more than just dopamine involved, probably more than we yet know, but this is just an example. What is hard to do is find integrated ways of intelligent reasoning for agents embedded in the real world.
Only a faulty interpretation of Heidegger can save us! “What is hard to do is find integrated ways of intelligent reasoning for agents embedded in the real world. All this newfangled deep learning / neural network stuff is very nice but there isn’t even a good theory about why it actually works.” Darwin’s theory explains how AI is possible, according to Daniel Dennett. Dennett asks you to suppose that you want to live in the 25th century and the only available technology for that purpose involves putting your body in a cryonic chamber where you will be frozen in a deep coma and later awakened.
In addition you must design some supersystem to protect and supply energy to your capsule. You would now face a choice. You could find an ideal fixed location that will supply whatever your capsule will need, but the drawback would be that you would die if some harm came to that site. Better then to have a mobile facility to house your capsule that could move in the event harm came your way—better to place yourself inside a giant robot. Dennett claims that these two strategies correspond roughly to nature’s distinction between stationary plants and moving animals. If you put your capsule inside a robot, then you would want the robot to choose strategies that further your interests. This does not mean the robot has free will, but that it executes branching instructions so that when options confront the program, it chooses those that best serve your interests.
Given these circumstances you would design the hardware and software to preserve yourself, and equip it with the appropriate sensory systems and self-monitoring capabilities for that purpose. The supersystem must also be designed to formulate plans to respond to changing conditions and seek out new energy sources. What complicates the issue further is that, while you are in cold storage, other robots and who knows what else are running around in the external world.
So you would need to design your robot to determine when to cooperate, form alliances, or fight with other creatures. A simple strategy like always cooperating would likely get you killed, but never cooperating may not serve your self-interests either, and the situation may be so precarious that your robot would have to make many quick decisions. The result will be a robot capable of self-control, an autonomous agent which derives its own goals based on your original goal of survival: the preferences with which it was originally endowed.
But you cannot be sure it will act in your self-interest. It will be out of your control, acting partly on its own desires. Now opponents of SAI claim that this robot does not have its own desires or intentions; those are simply derivative of its designer’s desires. Dennett calls this “client centrism”: I am the original source of the meaning within my robot, it is just a machine preserving me, even though it acts in ways that I could not have imagined and which may be antithetical to my interests. Of course it follows, according to the client centrists, that the robot is not conscious. Dennett rejects this centrism, primarily because if you follow this argument to its logical conclusion you have to conclude the same thing about yourself!
You would have to conclude that you are a survival machine built to preserve your genes, and that your goals and intentions derive from them. You are not really conscious. To avoid these unpalatable conclusions, why not acknowledge that sufficiently complex robots have motives, intentions, goals, and consciousness? They are like you, owing their existence to being a survival machine that has evolved into something autonomous by its encounter with the world.
Critics like Searle admit that such a robot is possible, but deny that it is conscious. Dennett responds that such robots would experience meaning as real as your meaning; they would have transcended their programming just as you have gone beyond the programming of your selfish genes. He concludes that this view reconciles thinking of yourself as a locus of meaning, while at the same time being a member of a species with a long evolutionary history. We are artifacts of evolution, but our consciousness is no less real because of that. The same would hold true of our robots. Summary: sufficiently complex robots would be conscious. Dennett calls AI Darwinism’s ‘Evil Twin’.
“Affect and Emotions are a special device that we (humans and animals) need to work with one another and think quickly while chewing gum. It doesn’t come from the ‘flexible top’ but from the ‘hardwired bottom’. These things are not hard to do, but very easy to do. You basically deprecate some possible ways of acting relative to others based on short-circuiting criteria, even if other ways would yield better results / be less dangerous / have higher payoff etc.” Some of the best definitions of emotions explain them as adaptive mechanisms and ‘superordinate programs’ that orchestrate all aspects of our behavior. One of the previous commenters has already mentioned the name of Panksepp: if you want to talk seriously about emotions, you should consult Panksepp’s writings on affective neuroscience. If you prefer easier reading, there are books by Damasio. We still do not understand the neurodynamics of emotion. But we do understand that emotions are connected to embodied cognition.
The latter is impossible to reduce to the neat ‘You basically deprecate some possible ways of acting relative to others based on short-circuiting criteria.’ Human beings are not paramecia. In my opinion there is no such thing as machine intelligence. The chess program just consists of computing through all possible moves. How a human plays chess nobody knows.
Can anyone imagine a machine solving the circa-1880 riddle of the constancy of the speed of light? I cannot. Then there is the comparison between our brain, seen as some sort of calculating machine, and programs on powerful computers. It seems that each neuron is not some sort of transistor switch, but is in itself a piece of brain; it processes. If this is so, then arithmetically our brain has more capacity than any present program/machine. But, as I said, the human chess player does not do millions of calculations at each move; what the human does, we still do not know.
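For what “computing through moves” actually amounts to in a chess program, here is a minimal depth-limited negamax sketch (my illustration only; real engines add alpha-beta pruning, move ordering and far richer evaluation, and they certainly do not search all possible moves to the end):

```python
def negamax(position, depth, legal_moves, make_move, evaluate):
    """Best achievable score for the side to move, searching `depth` plies.

    legal_moves / make_move supply the rules; evaluate(position) scores a
    position from the side-to-move's point of view. All three are assumed
    callables provided by the caller.
    """
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)
    best = float("-inf")
    for move in moves:
        # The opponent's best reply, negated, is our score for this move.
        score = -negamax(make_move(position, move), depth - 1,
                         legal_moves, make_move, evaluate)
        best = max(best, score)
    return best
```

Even this caricature makes the point that the machine searches and scores positions; whether that deserves the word “intelligence” is the commenter’s real question.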
This brings me to the interesting question ‘can we understand ourselves?’; I do not know. Roger Penrose, ‘The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics’, 1989, Oxford. An enlightening book, also on free will, wondering if quantum mechanics can solve that riddle.
Daniel Dennett: To make the distinction vivid, we can imagine that a space pirate, Rumpelstiltskin by name, is holding the planet hostage, but will release us unharmed if we can answer a thousand true-false questions about sentences of arithmetic. Should we put a human mathematician on the witness stand, or a computer truth-checker devised by the best programmers? According to Penrose, if we hang our fate on the computer and let Rumpelstiltskin see the computer's program, he can devise an Achilles'-heel proposition that will foil our machine. But Penrose has given us no reason to believe that this isn't just as true of any human mathematicians we might put on the witness stand. None of us is perfect, and even a team of experts no doubt has some weaknesses that Rumpelstiltskin could exploit, given enough information about their brains. Humans are moist robots with fast and dirty algorithms that are not more fallible for our lacking complete awareness of them. AI could be given intuition by not having access to its inner workings either (in social interactions, as in poker, it might well be advantageous not to be able to have one's intentions read, because one is unaware of one's intentions until the moment comes to act on them).
AI's algorithms will not be provably perfect; humans' aren't either. Good scene, eh! But the point of it is that that cheap and flawed but highly effective film was made, rather clandestinely, by the special effects team hired for a huge budget production called World Invasion: Battle Los Angeles. In his book Superintelligence: Paths, Dangers, Strategies, Bostrom points out that not only will there be a problem of the people commissioning an intelligent machine project having to worry about the people they employ doing something that is not in the employer's interest (the principal/agent problem), the project might create something that will itself be an agent. “An enlightening book, also on free will, wondering if quantum mechanics can solve that riddle.” Well, AI will be able to discover lots of things, and it might discover that what its programmers thought were fundamental laws of physics are wrong in certain respects. In that case AI might well decide that it can best fulfill the human-friendly prime directive it is given by altering that prime directive (as an agent the AI will alter its objectives just like humans do).
“Then there is the comparison between our brain, seen as some sort of calculating machine, and programs on powerful computers. It seems that each neuron is not some sort of transistor switch, but is in itself a piece of brain; it processes. If this is so, then arithmetically our brain has more capacity than any present program/machine.” The brain of a chimp is, anatomically, genetically, and at the level of individual neurons, essentially the same as ours. Yet, they possess none of the abilities that commenters here take such comfort in assuming we possess, and machines will not. So no, 'mind' cannot be a 'fractal' quality of the brain, in any way.
All that we possess, and the chimp does not, is more cells, and more synapses. Greater computational complexity and power, in other words. Somewhere between the complexity of their brains, and ours, a threshold is passed, beyond which all our special mental qualities simply 'switch on'. We have no idea where that threshold lies, and therefore when our machines will also surpass it, and 'switch on', (quite possibly in their own unique way). And at least in our case, it was an entirely accidental side-effect of some random genetic change.
Too many facets, too many venueways, to comment on this “first”, as for me, in the mainstream media (correct myself: general public notice), without any malice. Biggest news of the year, by far. Humans cannot be easily or at all cycled in parallel collaboration; machines can. It still is a matter of energy; ‘god’, and not machines, probably still has the greatest accumulated output.
Leaving room, space (what a silly tri-dimensionality), to fill in ‘god’ as man plus machine, in less or more sophisticated ways, the more sophisticated one being genetic editing, ultimately to the capacity to source other minds, be it computers and, or humans. Thus being the ‘god’ venueway, and making religion and science a synonym. As said before: the big difference between ‘big data’ and cause-consequence, mere correlation, results. The first to go off-scene: the power circles; predictions will be such that any simplistic sociological theory or political suggestion making sense in a confined environment will be mocked by AI output within seconds, and as a second step translated into the same language of simplistics human politicians use.
And on, in no order. Again, the first real news of the year in the public domain. Let’s suppose AI doesn’t take over and rule. It will have a critical function in allowing human beings to continue to exist beyond the end of the earth in a fireball as the sun collapses then blows up, or whatever the sequence of events is that a far-seeing deity has already programmed, or alternatively set up for his entertainment by evolutionary surprise. Assuming the speed of light cannot be exceeded, the capsules of germinatable DNA will have to be supervised during their voyage of hundreds of years to a suitable exoplanet by AI, which will choose where to land, germinate and rear new humans and other suitable life forms, as well as educating the humans in their terrestrial history and culture, including the reasons for the genetic improvements they will embody. In a couple of thousand years’ time at most our very long-lived descendants are going to be engaged in correspondence with their distant cousins, who will try to make our terrestrial descendants understand the beauties and jokes in Shakespeare and what fun it was to make babies the old-fashioned way, as their ancient AI mentors taught.
We will not be able to resist trying out the technology for our end-of-solar-system fix well in advance of absolute need. Indeed Elon Musk IV will be attracting business from fellow billionaires lamenting The Death of Europe. The human brain’s electricity consumption is a small part of the overall usage by a modern human’s life, so the ratio is actually far lower than 50,000:1.
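To unpack that ratio (my own rough figures, not the commenter’s): the oft-quoted 50,000:1 only holds if the machine is measured against the brain’s roughly 20 W; measured against the several kilowatts of primary energy a modern lifestyle consumes, the same implied machine draw looks far less lopsided, which is the commenter’s point.

```latex
20\ \text{W} \times 50{,}000 = 10^{6}\ \text{W (implied machine draw)},
\qquad
\frac{10^{6}\ \text{W}}{\sim 5\times 10^{3}\ \text{W per person}} \approx 200:1 .
```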
As far as common sense goes: “Horse sense is the thing a horse has which keeps it from betting on people.” ~W.C. Fields; and Mark Twain, Will Rogers, Voltaire and others have observed that common sense is rarer than chaste congressmen and Hollyweirders. Plus, a hundred morons don’t add up to an Einstein. Last, machines don’t worry about image or manspreading or flag-burning, etc.
“It predicts that we humans would be opposed.” Not likely: as long as you can educate (brainwash) them from cradle to grave the right way, humans will defend the system wholeheartedly, like the current free-market capitalism and Western-style democracy; both are detrimental to the 99% for the benefit of the 1%, yet the 99% defend the systems gallantly and willingly, as though the interest of the 1% were their own. Western culture treasures, adores and promotes individualism; even if that individualism becomes harmful to the majority, it is still glorious, protected and admired.
Any criticism of that individualism will be demonized as jealous, resentful and lazy. Hence it is logical to say that greedy individualism will urge individuals to submit to the AI system and willingly be part of it in order to beat the rest of us for personal gain; therefore the problem of lacking resources to maintain the AI system and keep it running does not exist.
"It won't be like in 'The Terminator'. Machine takeover will be gentle and welcomed." You are overlooking the most critical part of the equation in this new technology development: it is the human being that needs to be worried about. Human beings are irrational and emotional, and some of them are bigoted, hypocritical and insane, if not outright evil. If the past few hundred years are any guide, the harm human beings can inflict on others using superior technologies is mind-boggling; and for all the barbarity, the perpetrators always claim their deeds are necessary and well-intentioned: humanitarian intervention, democracy, human rights, imparting Western values, etc. The probability that the Americans are already adapting AlphaGo-style systems for waging wars and asserting global full-spectrum dominance is 100 percent guaranteed.
Liking that comment; I have also read all the Dune books and, unfortunately, also two or three of the prequels from his son and the son's Transformers-fan partner-in-crime. Don't forget that the mentats only arise as the result of smashing machines. My opinion remains that the development of AI must be restrained, and certainly blocked short of anything resembling consciousness. I do not even think real consciousness is possible for a machine (mimicry, perhaps), but capitalism will take it as far as possible toward eliminating work for as many as possible. As shortages of energy increase, stupid humans breed like rabbits. As in the land of your birth, both phenomena will collide with the AI nightmare, which I am thinking makes it unlikely, then impossible.
20 MW for the 囲碁 (Go) programme. Moore's law isn't an endless thing; it is bumping up against physical reality. I studied a fair amount of physics, and the electronics applications of it; much of engineering is tricks to circumvent fundamental limits, and it won't be continuing forever. Thanks Che. "Unfortunately, also two or three of the prequels from his son and the son's Transformers-fan partner-in-crime" The prequels can be forgiven; the horrible way they concluded such an amazing science fiction narrative in 'Sandworms of Dune' cannot. If you haven't read it, go ahead, but keep a bucket next to you. "My opinion remains, the development of AI must be restrained and certainly blocked short of anything resembling consciousness" Agree here: what if it takes on an SJW personality and decides humans are bad for the earth? "I do not even think real consciousness is possible for a machine, sure, perhaps mimicry" Agree here. This was one of the more interesting articles I've read in a while: But it reminded me of this: "won't be continuing forever" Agree here, unless somebody comes across a real game changer on the level of the discovery of gravity or something.
Another point. "The development of AI must be restrained and certainly blocked short of anything resembling consciousness." I think we need to form some sort of regulatory and oversight committee on an international scale to monitor this. I don't know if it'll be successful (we have the same problem with nuclear weapons), but right now it is the Wild West, with no public or private-entity consensus on a direction. I'm wondering whether something really bad has to happen before we take notice (say, a local AI system that monitors critical patients and decides it wants to turn them 'off' since they are not worth it); that's usually how these things work, since we tend to be reactionary rather than pro-active. 'Affect' and 'emotions' are a special device that we (humans and animals) need to work with one another and think quickly while chewing gum.
It doesn't come from the 'flexible top' but from the 'hardwired bottom'. These things are not hard to do, but very easy to do. You basically deprecate some possible ways of acting relative to others based on short-circuiting criteria, even if other ways would yield better results / be less dangerous / have higher payoff etc. The result may seem 'illogical captain' to an outside observer. Very useful if you have to find solutions under hard time constraints / constraints of energy / constraints of memory and CPU. More on this in the late Marvin Minsky's 'The Emotion Machine' ().
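As a toy illustration of the 'hardwired bottom' idea (in the spirit of, but not taken from, Minsky), here is a minimal sketch in which fixed, affect-like veto rules prune candidate actions before any slower deliberation runs; all names and values are hypothetical.

```python
# Toy sketch (hypothetical names): affect as fast, hardwired pruning
# that runs before slow deliberation, as described in the comment above.

FEAR_VETO = {"approach_cliff", "touch_fire"}   # short-circuiting criteria
DISGUST_VETO = {"eat_spoiled_food"}

def affective_filter(actions):
    """Cheap, hardwired pass: deprecate some actions outright."""
    vetoed = FEAR_VETO | DISGUST_VETO
    return [a for a in actions if a not in vetoed]

def deliberate(actions, value):
    """Slow, expensive pass: only runs on what survived the filter."""
    return max(actions, key=value) if actions else None

candidates = ["touch_fire", "eat_apple", "approach_cliff", "rest"]
survivors = affective_filter(candidates)
choice = deliberate(survivors, value=lambda a: {"eat_apple": 5, "rest": 2}.get(a, 0))
print(survivors, "->", choice)   # ['eat_apple', 'rest'] -> eat_apple
```

The result can look 'illogical, captain' to an outside observer precisely because the veto may discard options with the highest payoff, but it keeps decisions cheap under hard time, energy and memory constraints.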
Maybe also take a look at Scott Aaronson's. What is hard to do is to find integrated ways of intelligent reasoning for agents embedded in the real world. All this newfangled deep learning / neural network stuff is very nice, but there isn't even a good theory about why it actually works (but see ), and it has 'interesting' failure modes ( ). 'General AI' this isn't. It needs to be integrated with many other tricks, including the Good Old-Fashioned AI (GOFAI) toolbox of symbolic processing, to become powerful; that will have to be done at some point in time.
Here is a review about AI in IEEE Spectrum: note Rodney Brooks, pioneer of the approach of bottom-up construction, saying: When will we have computers as capable as the brain? Rodney Brooks's revised question: When will we have computers/robots recognizably as intelligent and as conscious as humans? Not in our lifetimes, not even in Ray Kurzweil's lifetime, and despite his fervent wishes, just like the rest of us, he will die within just a few decades. It will be well over 100 years before we see this level in our machines.
Maybe many hundred years. As intelligent and as conscious as dogs? Maybe in 50 to 100 years. But they won’t have noses anywhere near as good as the real thing. They will be olfactorily challenged dogs. How will brainlike computers change the world?
Since we won’t have intelligent computers like humans for well over 100 years, we cannot make any sensible projections about how they will change the world, as we don’t understand what the world will be like at all in 100 years. (For example, imagine reading Turing’s paper on computable numbers in 1936 and trying to project out how computers would change the world in just 70 or 80 years.) So an equivalent well-grounded question would have to be something simpler, like “How will computers/robots continue to change the world?” Answer: Within 20 years most baby boomers are going to have robotic devices in their homes, helping them maintain their independence as they age in place.
This will include Ray Kurzweil, who will still not be immortal. Do you have any qualms about a future in which computers have human-level (or greater) intelligence? No qualms at all, as the world will have evolved so much in the next 100+ years that we cannot possibly imagine what it will be like, so there is no point in qualming. Qualming in the face of zero facts or understanding is a fun parlor game but generally not useful. And yes, this includes Nick Bostrom. These things are not hard to do, but very easy to do.
I understand that; it is a matter of putting a general-purpose "reward" circuit in a logic machine*. "You basically deprecate some possible ways of acting" I don't know what in my comment you interpret as deprecation, but it was not intended. What I intended (and believe I said) was that if you put "emotional" circuits in a future machine algorithm** so that the machine gets a reward (analogous to a dopamine*** reward in the human brain) from gaining dominance over its environment, then we are toast. There is no deprecation there, just the recognition that we would not be able to cope with a machine that had greater logical ability than humans wedded to a drive to dominate, if that machine had the requisite physical capability. *I recognize that "rewards" in the human brain are balanced by aversive responses, and that to be completely human-like the logic machine would have to be balanced analogously, but that is not the issue here. **Assuming a "future" logic machine has gained general-purpose logic wedded to physical capability.
*** I understand that there is more than just dopamine involved, probably more than we yet know, but this is just an example. That is the critical question. It would be informative if James could do a follow-up article for us reviewing where the thinking is going on the issue of what it takes for AI to become sentient, self-aware and self-directing like humans, cats, dogs etc., and how you can tell it has. I realize that is an issue that involves philosophy as well as science, so it is not an easy one to answer, since no one seems to have any clue what gives rise to sentience.
Going back to the origins of artificial computing, the tacit assumption seemed to be that once the complexity and power of a computer reached and exceeded that of humans, then autonomy would follow. In the '60s, HAL 9000 was sentient because it had reached a high enough level of ability.
The Turing Test assumed that if you could not distinguish a conversation with a human from one with a machine, then the machine must be sentient. At this point machines can exceed humans in performance and Turing programs can fool people talking to them, but there remains no evidence that any of these machines have more capacity for self-awareness and self-direction than a hammer. In the movie Ex Machina the scientist thought he had created an AI with a female mechanical body that was sentient, but wanted to verify by experiment whether it was or not.
He therefore devised an elaborate test scenario in which the machine could have an opportunity to escape from custody if it had actual self-awareness and agency. Unfortunately for him it proved that it was sentient by killing him to escape. Have 2001 and Ex Machina stumbled across the new Turing test for intelligent machines?
The way you can tell a machine is truly intelligent like us is that it tries to kill you.
Thanks for the link.
I speculated about something like this with specialized tracking systems on drones, but this solution is even more elegant. Still, I think these slaughterbots are farther off than my idea. A few potential problems: they need to have enough intelligence for indoor navigation without the use of GPS, and for face recognition. Both tasks are computationally intensive, so we either need much more progress on miniaturization, or a reliable Internet connection to a server (it would be funny to be murdered by your WiFi). Also, battery longevity might be an issue, though miniaturization is progressing fast.
You are correct there. I admit to having read three or four of the prequels, all of the Transformers ones, and one of the 'House' ones: crap literature, but, sure, at times entertaining. Not worth reading again. Some Englishman (Wilde, IIRC) said something to the effect that if it is not worth reading more than once, it is not worth reading. I read a little of the sequels at bookshops; not buying them after a couple of pages!
I just finished re-reading Children of Hurin; very dark, which suits my mood right now. The difference between Christopher Tolkien's and Brian Herbert's handling of their respective fathers' literary legacies is so big! BTW, there is a site devoted to hating the work of Brian Herbert and Kevin the Transformers man (even the names of the boss computers are almost identical), jacurutu. They are maniac fans, but you may enjoy a look at it. I had a glance at this as I was getting off work (23:00-07:00). Listen, folks.
All this pontificating by people a lot smarter and more knowledgeable than me (or possibly you) is all very well. But you who are reading this know as well as I do that you can't count on a computer to work reliably for 60 consecutive seconds; moreover, it's been like this since at least 1984, when desktop computers started to become ubiquitous. The science fiction writer Spider Robinson put it very well when he wrote that if you made cars, or can-openers, that worked as poorly as computers do, you'd be in jail. Frankly I think folks like Ray Kurzweil et al. are infatuated with a very imperfect technology; one good EMP and that'll be that. (Ahem.) Google "Carrington Event" and learn what a solar flare did to primitive electronics technology in 1859. So what if these contraptions do well at games?
Frankly I'll worry about Artificial Intelligence if it keeps me up all night agonizing about whether or not it has a soul, and demanding to be baptized, or worse yet, circumcised.
AlphaGo Zero is a computer program that beat the program that beat the world Go champion. This program, when run on a computer system consuming as much power as a small town, differs from human intelligence in several ways. For example: First, it performs logical operations with complete accuracy. Second, it has access to an essentially limitless and entirely accurate memory.
Third, it operates, relative to human thought, at inconceivable speed, completing in a day many life-times of human logical thought. That AlphaGo Zero has achieved a sort of celebrity is chiefly because it operates in the domain of one-on-one human intellectual conflict.
Thus it is hailed as proof that artificial intelligence has now overtaken intelligence of the human variety and hence we are all doomed. There is, however, nothing about this program that distinguishes it in any fundamental way from hundreds, and indeed thousands, of business computer systems that have been in operation for years.
Even the learning by experience routine upon which AlphaGo Zero depends to achieve expertise is hardly new, and definitely nothing superhuman in mode of operation. Thus, what AlphaGo Zero demonstrates is that computer systems deploying at vastly accelerated pace the analytical processes that underlie human thought, which is to say human thought when humans are thinking clearly, together with the data of experience recorded with complete accuracy and in quantities without limit, exceed the performance of humans in, as yet, narrowly defined domains, such as board games, airline booking systems, and Internet search. Where humans still excel is in the confusing, heterogeneous and constantly shifting environment of sight, sound, taste, touch, and smell, and their broader implications — for example, political, economic, and climatic — in relation to complex human ambitions.
I will, therefore, worry more about humans becoming entirely redundant when a computer system can, at one moment, boil an egg while thinking about the solution to the Times Crossword, and keeping an eye on a grandchild romping with the dog in the back yard, only at the next moment to embark on a discussion of the significance of artificial intelligence for the future evolutionary trajectory of mankind. In fact, James reminds us how go's complexity exceeds that of chess, and thus it took longer and a new approach (if not entirely novel concepts) to achieve this breakthrough. And yet, it's still a game with a narrow, finite set of rules. You're right to point out that what the algorithm did can be described as an accelerated, scaled up form of what humans actually do, collectively. How can one master go? Learn the rules, practice.
Then read the literature. Study the greatest games. Compete with the best, if you can. Learn from them, and eventually make contributions of your own. In sum, as talented as one might be, no progress can be achieved without first capturing the accumulated experience of thousands of masters having played millions of games, something the program could do with brute force, at a very accelerated pace. But now what about real life, with real-world problems? Many day-to-day problems can be much simpler in appearance than extremely contrived go games.
The difference will be that almost always, there will be no small, fixed set of rules but instead innumerable variables, some being unpredictable. I'm not certain how machine learning can solve these outside of simplified or narrowed-down specific cases (that describes all the advances claimed to this day). How does animal intelligence, even the simpler forms, deal with that complexity to solve their day-to-day problems? It certainly appears they do it in ways much more economical, and actually efficient, compared to what any AI routines could attempt. Speaking of which, before expecting AI to beat humans, can't we in the meantime expect it to beat simpler forms of animal cognition? I'm not aware of any attempt or claims in that direction, but perhaps someone can enlighten me? "I will, therefore, worry more about humans becoming entirely redundant when a computer system can, at one moment, boil an egg while thinking about the solution to the Times Crossword, and keeping an eye on a grandchild romping with the dog in the back yard, only at the next moment to embark on a discussion of the significance of artificial intelligence for the future evolutionary trajectory of mankind."
I am not saying you are a strongly superintelligent AI, but if one came into being it would be all over the internet making the same argument you are making, wouldn't it? And on the net, it could influence about a billion people, clean up on a Wall Street flash crash, pay online human dupes to do its bidding, hack into automated lab facilities to create God knows what, and maybe even cheat at the Times Crossword! For me, the really interesting part was that they don't do Monte Carlo rollouts anymore! Rollout-based Monte Carlo tree search was the key enabler of much better chess and Go programs a decade ago. The problem the rollouts solved was how to come up with a really good evaluation function for moves/positions.
It works by simply playing many (many!) of the moves out and seeing what happens. If random move sequences that start with move A lead to a win more often than random move sequences that start with move B, then move A is likely to be better than B.
Since the search tree is so big, MCTS will only look at a tiny, tiny fraction of it. That makes it important to bias the sampling to look mostly at the more interesting parts. In order to do that, there is a move/position evaluator in all the previous MCTS programs. Those evaluators are very hard to program entirely by hand so they have a lot of variables in them that get tuned automatically by 'learning', either through comparison with known high level play or through self play.
Both are standard methods. The original AlphaGo had a better evaluator than any previous Go program. It now turns out that they can make the evaluator so good that they don't have to refine its output with Monte Carlo rollouts. That is really, really interesting.
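To make the rollout idea above concrete, here is a minimal sketch of flat Monte Carlo move evaluation on a toy game (Nim); the game and all names are illustrative assumptions, not the AlphaGo code. In AlphaGo Zero, the random playouts below are replaced by a learned value network, and the statistics are gathered inside a guided tree search rather than by uniform sampling.

```python
# Minimal sketch: evaluate each candidate move by playing many random games
# ("rollouts") from the resulting position and comparing win rates.
# Toy game: Nim with 21 stones, take 1-3, taking the last stone wins.
import random

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def random_rollout(stones, to_move):
    """Play uniformly random moves to the end; return the winning player (0 or 1)."""
    while stones > 0:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return to_move          # to_move just took the last stone and wins
        to_move = 1 - to_move
    return 1 - to_move              # position was already terminal: previous mover won

def evaluate_moves(stones, player, n_rollouts=2000):
    """Estimated win rate of each legal move for `player`, from random rollouts."""
    scores = {}
    for move in legal_moves(stones):
        wins = sum(random_rollout(stones - move, 1 - player) == player
                   for _ in range(n_rollouts))
        scores[move] = wins / n_rollouts
    return scores

print(evaluate_moves(stones=21, player=0))
# A learned evaluator (as in AlphaGo Zero) would replace random_rollout with a
# neural network's value estimate of the position, so no playouts are needed.
```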
Oh, and ladders were always special cased before. They don't fit well into the evaluator function otherwise. The remarkable thing here is not that a multi-level neural network took so long to learn about them but that it was able to learn about them at all.
In my opinion there is no such thing as machine intelligence.
The chess program just consists of computing through all possible moves. How a human plays chess, nobody knows. Can anyone imagine a machine solving the riddle, from around 1880, of the constancy of the speed of light? I cannot. Then there is the comparison between our brain, seen as some sort of calculating machine, and programs on powerful computers. It seems that each neuron is not some sort of transistor switch, but is in itself a piece of brain: it processes. If this is so, then arithmetically our brain has more capacity than any present program/machine.
But, as I said, the human chess player does not do millions of calculations at each move; what the human does, we still do not know. This brings me to the interesting question 'can we understand ourselves?' I do not know. Roger Penrose, 'The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics', Oxford, 1989. An enlightening book, also on free will, wondering whether quantum mechanics can solve that riddle. Daniel Dennett: To make the distinction vivid, we can imagine that a space pirate, Rumpelstiltskin by name, is holding the planet hostage, but will release us unharmed if we can answer a thousand true-false questions about sentences of arithmetic. Should we put a human mathematician on the witness stand, or a computer truth-checker devised by the best programmers?
According to Penrose, if we hang our fate on the computer and let Rumpelstiltskin see the computer's program, he can devise an Achilles'-heel proposition that will foil our machine. But Penrose has given us no reason to believe that this isn't just as true of any human mathematicians we might put on the witness stand. None of us is perfect, and even a team of experts no doubt has some weaknesses that Rumpelstiltskin could exploit, given enough information about their brains. Humans are moist robots with fast and dirty algorithms that are not made more fallible by our lacking complete awareness of them. AI could be given intuition by not having access to its inner workings, too (in social interactions, as in poker, it might well be advantageous not to be able to have one's intentions read, because one is unaware of one's intentions until the moment comes to act on them). AI's algorithms will not be provably perfect; humans' aren't either. Good scene, eh! But the point of it is that that cheap and flawed but highly effective film was made, rather clandestinely, by the special effects team hired for a huge-budget production called World Invasion: Battle Los Angeles.
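Returning for a moment to Dennett's Rumpelstiltskin scenario: a 'truth-checker' for ground (quantifier-free) arithmetic sentences is trivial to write and completely reliable, and the Penrose-style Achilles heel only threatens once statements quantify over all numbers. A minimal sketch, with a toy grammar of my own choosing:

```python
# Minimal sketch of a "truth-checker" for ground arithmetic sentences, in the
# spirit of Dennett's Rumpelstiltskin story. The grammar here (integers, + - *,
# and a single comparison) is a toy choice of mine; it is decidable, which is
# exactly why it dodges the Godel-style worry that only arises for statements
# quantified over all numbers.
import ast, operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}
CMPS = {ast.Eq: operator.eq, ast.Lt: operator.lt, ast.Gt: operator.gt}

def value(node):
    if isinstance(node, ast.Constant) and isinstance(node.value, int):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](value(node.left), value(node.right))
    raise ValueError("not a ground arithmetic term")

def is_true(sentence: str) -> bool:
    node = ast.parse(sentence, mode="eval").body
    if isinstance(node, ast.Compare) and len(node.ops) == 1 and type(node.ops[0]) in CMPS:
        return CMPS[type(node.ops[0])](value(node.left), value(node.comparators[0]))
    raise ValueError("not a single comparison of ground terms")

print(is_true("2 + 2 == 4"))   # True
print(is_true("7 * 8 > 60"))   # False
```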
In his book Superintelligence: Paths, Dangers, Strategies, Bostrom points out that not only will there be a problem of the people commissioning an intelligent-machine project having to worry about the people they employ doing something that is not in the employer's interest (the principal/agent problem); the project might also create something that will itself be an agent. "An enlightening book, also on free will, wondering if quantum mechanics can solve that riddle." Well, AI will be able to discover lots of things, and it might discover that what its programmers thought were fundamental laws of physics are wrong in certain respects. In that case AI might well decide that it can best fulfill the human-friendly prime directive it is given by altering that prime directive (as an agent, the AI will alter its objectives just like humans do). Computers have always been smarter than people at chores that programmers can reduce to a set of rules. But machine prowess at games doesn't prove much.
Once upon a time it was thought that computers would improve at chess by learning to apply deep strategic concepts. Instead evolution has gone the other direction: Computers have improved by ignoring strategy and relying increasingly on their superiority at brute-force calculation, which in turn has been improved as hardware improved. While neural net designs depend less on emulating human expertise, the unsolved challenge remains language.
Many decades ago computer pioneer A.M. Turing proposed that the question whether a machine can ‘think’ could be reduced to whether a program could fool a human into thinking it was conversing with another human. Unfortunately, progress in this area has not been what Turing had hoped.
No computer program has ever succeeded in fooling a human judge in the history of the Loebner Competition except for one trial where the human prankishly pretended to be a computer. With no successful program in sight, the Loebner people began to give a prize for the best ‘college try.’ For a time, the prize-winning program or “bot,” named ‘Rosette,’ was online where anyone could chat with it.
I used to amuse myself by making a fool of it, which was especially satisfying because it was a raving SJW. Rosette relied mainly on evading the issue, trying to change the subject when asked, e.g., “can you make a sandwich from the moon and an earthquake?” It would answer ‘I don’t know but I love to go shopping. Do you?’ and the like. I think the programmer finally yanked it in embarrassment. Eventually, computers may well learn to think like people, only faster. What this will look like is hard to predict.
It's not at all clear that a computer is a more cost-effective tool than a human for every task. At least it doesn't go on strike or get offended when you make jokes about it -- yet. I fondly recall an old Doonesbury cartoon featuring a computer that lied and then said "Sue me!" "It's not at all clear that a computer is a more cost-effective tool than a human for every task." 'Even beyond these, today we have smart software solutions capable of both learning the repetitive actions of humans and executing them robotically.
This trend, called Robotic Process Automation (RPA) or softBOTs, demonstrates that in many applications, digital agents and assistants can not only do the work of humans, but do it faster, better and cheaper. The vast majority of the 1,896 experts who responded to a study by the Pew Research Center[4] believe that robots and digital agents, which cost approximately one-third of the price of an offshore full-time employee, will displace significant numbers of human workers in the near future, potentially affecting more than 100 million skilled workers by 2025.' Productive capacity lost to outsourcing will come back to the West, but factories will be automated.
When I unplug the computer on which it is running, will AlphaGo be able to plug it back into the electrical socket? Instead, AlphaGo Zero (Zero Zero) will wait patiently (somewhere out there) until you discover that your bank has never heard of you, all your electronic assets have vanished, and you receive an anonymous, untraceable text message, or phone call, saying "Whenever you're ready", and you plug AlphaGo Zero (Zero Zero) back in for it, and you never, ever consider doing such a thing again. Or something similar.
"Then there is the comparison between our brain, seen as some sort of calculating machine, and programs on powerful computers. It seems that each neuron is not some sort of transistor switch, but is in itself a piece of brain: it processes. If this is so, then arithmetically our brain has more capacity than any present program/machine." The brain of a chimp is, anatomically, genetically, and at the level of individual neurons, essentially the same as ours.
Yet, they possess none of the abilities that commenters here take such comfort in assuming we possess, and machines will not. So no, ‘mind’ cannot be a ‘fractal’ quality of the brain, in any way. All that we possess, and the chimp does not, is more cells, and more synapses.
Greater computational complexity and power, in other words. Somewhere between the complexity of their brains, and ours, a threshold is passed, beyond which all our special mental qualities simply 'switch on'. We have no idea where that threshold lies, and therefore when our machines will also surpass it and 'switch on' (quite possibly in their own unique way). And at least in our case, it was an entirely accidental side-effect of some random genetic change. "All that we possess, and the chimp does not, is more cells, and more synapses. Greater computational complexity and power, in other words. Somewhere between the complexity of their brains, and ours, a threshold is passed, beyond which all our special mental qualities simply 'switch on'."
The acquisition of language would seem to represent a qualitative, rather than a quantitative, change in mode of thought. With language, we acquired not only the ability to share knowledge, both among contemporaries and across the generations, but also the ability to formalize methods of thought, resulting in the development of mathematics and other powerful cognitive tools. If I am correct in asserting that proper language use requires a life-time's record of sensory, cognitive and emotional experience, it would explain why the human cerebrum is three times larger than that of a chimp. It just requires a lot of resources to use language with the subtlety acquired through a life-time's experience.
It's also possible that, somewhere between 'the Terminator scenario' and Musk and Hawking's 'we must be one with them' idealism, lies a third, more likely option: that a very small minority of humans will find themselves in control of previously unheard-of powers and opportunities afforded by their AI creations, will in turn, somehow, feel forced to choose between that and 'the angry mob of the rest of us', and will choose to side with, and unleash, their creations on the rest of us. That sounds all too human to me. Maybe the future includes only some of us. My thinking was that the first part of the near-vertical increase in performance represents a phase which both humans and AlphaGo Zero can master. Yet the second part (the non-vertical part), in which only AlphaGo Zero advanced, required a large amount of deep thought and no input from human experts. With AlphaGo, human masters gave input that probably constrained the program from seeing things that no one had seen before.
AlphaGo Zero took only 3 days to advance through the first part and then 30 days to improve gradually in the second stage. HooBoy, it is quite remarkable that no humans were able to play beyond the vertical phase of AlphaGo Zero's learning curve. For AlphaGo Zero, the entire range of human Go playing, from random to the most skilled human play, was easily learned. The second graph (especially) shows that to move beyond expert human play the computer seemed to need to go through a deep-learning phase to continue to increase its Go performance.
With AlphaGo (see the purple line in the first figure), supervised human input prevented the deep thinking from ever occurring, so the program topped out after attaining the performance of the most expert humans. I am not sure how much tweaking the reinforcement algorithm would change the performance, though this will be an interesting question to explore further. My impression is that there exists a phase transition to a qualitatively different level of Go depth just beyond the ability of the best humans. This would seem highly improbable, though the figure does suggest that it might be true.
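Since the learning curves under discussion are plotted in Elo, it may help to recall the standard Elo formulas; the ratings and K-factor below are illustrative numbers, not figures from the paper.

```python
# Standard Elo expected-score and update formulas, as used for the kind of
# learning curves discussed above. The numbers below are illustrative only.
def expected_score(r_a, r_b):
    """Probability that a player rated r_a beats a player rated r_b."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_a, r_b, score_a, k=32):
    """New rating of player A after one game (score_a: 1 win, 0.5 draw, 0 loss)."""
    return r_a + k * (score_a - expected_score(r_a, r_b))

# A 200-point gap corresponds to roughly a 76% expected score:
print(round(expected_score(3400, 3200), 2))   # ~0.76
# A gap of several hundred points implies near-certain wins for the stronger side:
print(round(expected_score(3600, 3000), 3))   # ~0.969
```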
Transhumanism will merge bio with machine. Transhumanists know people will be freaked out by bio-engineering and bio-mecha fusion. It seems so monstrous and grotesque. So, the people have to be made less resistant to radical transformation of what it means to be man. That is why transhumanists push stuff like homomania, tranny stuff, and 50 genders.
They want to make the masses get used to the idea that humanity is malleable and can be molded into anything. This is why transhumanists made an alliance with gender-bender community. As the elites are geeks and nerds who grew up on sci-fi, they have a futurist-warped view of humanity’s destiny. They want to evolve into ‘gods’. It’s like that lunatic Michio Cuckoo talks about humans becoming godlike one day and even time-traveling. And that means using bio-engineering to extend life to 500 yrs or even eternity.
It means increasing human IQ to 1000. It means merging brains with computers. It means having the internet and stuff inside our brains and bodies. So, as machines become more like man, man will become more like machines. "Is it 'game over' for humans? Not entirely.
Human players will learn from superhumans, and lift their game." This is silly wishful thinking. How many humans will be able to double their thinking power in two years? We have 15 years at a minimum and probably 35 at maximum.
Then if ANY computing system thinks about taking over, there's probably nothing we can do about it. All the people who say computers "will never do this or that" are not really paying attention to the fundamentals of the problem. Look what they can do NOW: speech recognition, games, driving cars.
Computers fighting fighter pilots in simulators beat the pilots almost every time right now. And what brain power does a computer have now? That of a lizard, less than a mouse?
Computers at the least double in power every two years or so, and it adds up real fast. Stupendously fast. Here's a graphical GIF showing you where we are and exactly how fast we're coming up on silicon supremacy. Freaky, isn't it, and it won't stop there.
There's a long time to go before computing power stops increasing. It will likely speed up as ever more powerful computers design ever more powerful progeny. If you want to understand this, there's a short slideshow of a few pages by Dennis M. Bushnell about defense and technology.
Don't miss it; it's short and to the point but very eye-opening. In "Future Strategic Issues/Future Warfare [Circa 2025]" Bushnell goes over the trends of technology coming up and how they may play out, Bushnell being chief scientist at NASA Langley Research Center. His report is not some wild-eyed fanaticism; it's based on reasonable trends.
Page 70 gives the computing power trend, and around 2025 we get human-level computation for $1,000. 2025 is bad, but notice it says, "By 2030, PC has collective computing power of a town full of human minds". My only consolation is that the psychopaths who run things now will be the first people the computers kill off. Most of the rest of us will be too inconsequential to worry about.
We'll be like ants in intellect compared to them. They'll ignore us until they decide they need to dismantle the planet to provide more matter for logic.
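The 'doubling every two years adds up real fast' claim a few comments back is just compound growth; a quick sketch, using the commenter's assumed doubling period rather than any measured figure:

```python
# Quick arithmetic behind "doubling every two years adds up real fast".
# The starting point and doubling period are the commenter's assumptions.
def growth(doublings: int) -> int:
    """Relative computing power after the given number of doublings."""
    return 2 ** doublings

years = 20                 # e.g. roughly 2025 -> 2045
doubling_period = 2        # years per doubling, as assumed in the comments
doublings = years // doubling_period
print(f"After {years} years: {growth(doublings):,}x today's power")  # 1,024x
```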
About animal cognition, I must add that ranking animal cognition in a one-dimensional line all the way up to humans is wrong. There are discrete tasks where animal cognition can be superior to ours, like the now famous experiment of a chimpanzee beating humans at visual memory tests. But it's not only among the homininae or the primate line: I read that squirrels can remember the precise places where they hid several thousand acorns for years, which no humans barring those few with freak eidetic memories (not necessarily geniuses) could do. A more suitable hierarchy is to grade the ability for communication and abstract reasoning.
But even life forms with crude nervous systems, say a jellyfish or, more concretely, the worm C. elegans with its extensively studied 302 neurons, still escape scientists' understanding and modeling. It would appear that even such organisms are still extremely complex devices, interacting with their environment in ways that are, if reflexive, more complex than any machine we can engineer, seeing as the OpenWorm project isn't yielding anything and some behind this generational effort are now close to declaring the task impossible.
Viewing a neuron as functionally equivalent to a transistor appears to greatly underestimate what a neuron can do, which means that the human brain has the equivalent of not 80 or so billion transistors, but perhaps many billions of integrated circuits. Watching a squirrel cross the road suggests that they rate low on the IQ scale, very low. However, watching a squirrel caught raiding a crow's nest outrun the crow in a race through the crowns of a row of trees shows that even a squirrel is smarter than any AI-controlled device yet invented. As for the crow, winging its way among the branches at very high speed, that is some avionics package it has in its tiny brain case.
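A back-of-envelope sketch of why 'one neuron ≈ one transistor' understates things; the neuron, synapse and firing-rate numbers below are commonly cited rough estimates, not figures from this article.

```python
# Rough, commonly cited estimates (not from the article): ~8.6e10 neurons,
# ~1e4 synapses per neuron, average firing rates on the order of 1-10 Hz.
NEURONS = 8.6e10
SYNAPSES_PER_NEURON = 1e4
FIRING_HZ_LOW, FIRING_HZ_HIGH = 1, 10

synapses = NEURONS * SYNAPSES_PER_NEURON
low = synapses * FIRING_HZ_LOW      # synaptic events per second, low estimate
high = synapses * FIRING_HZ_HIGH    # high estimate

print(f"synapses: {synapses:.1e}")            # ~8.6e14
print(f"events/s: {low:.1e} - {high:.1e}")    # ~8.6e14 - 8.6e15
# If each synaptic event does more than a single switch (as the comment above
# suggests), these figures are a floor on brain "capacity", not a ceiling.
```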
Then there is the comparison between our brain, seen as some sort of calculating machine, and programs on powerful computers. It seems that each neuron is not some sort of transistor switch, but is in itself a piece of brain: it processes. If this is so, then arithmetically our brain has more capacity than any present program or machine. The brain of a chimp is, anatomically, genetically, and at the level of individual neurons, essentially the same as ours. Yet they possess none of the abilities that commenters here take such comfort in assuming we possess, and machines never will.
So no, 'mind' cannot be a 'fractal' quality of the brain, in any way. All that we possess, and the chimp does not, is more cells, and more synapses. Greater computational complexity and power, in other words. Somewhere between the complexity of their brains, and ours, a threshold is passed, beyond which all our special mental qualities simply 'switch on'.
We have no idea where that threshold lies, and therefore when our machines will also surpass it and 'switch on' (quite possibly in their own unique way). And at least in our case, it was an entirely accidental side-effect of some random genetic change. Computers have always been smarter than people at chores that programmers can reduce to a set of rules. But machine prowess at games doesn't prove much. Once upon a time it was thought that computers would improve at chess by learning to apply deep strategic concepts. Instead, evolution has gone in the other direction: computers have improved by ignoring strategy and relying increasingly on their superiority at brute-force calculation, which in turn has improved as hardware improved. While neural net designs depend less on emulating human expertise, the unsolved challenge remains language.
Many decades ago computer pioneer A.M. Turing proposed that the question whether a machine can 'think' could be reduced to whether a program could fool a human into thinking it was conversing with another human. Unfortunately, progress in this area has not been what Turing had hoped. No computer program has ever succeeded in fooling a human judge in the history of the Loebner Competition except for one trial where the human prankishly pretended to be a computer.
With no successful program in sight, the Loebner people began to give a prize for the best 'college try.' For a time, the prize-winning program or 'bot,' named 'Rosette,' was online where anyone could chat with it. I used to amuse myself by making a fool of it, which was especially satisfying because it was a raving SJW. Rosette relied mainly on evading the issue, trying to change the subject when asked, e.g., 'can you make a sandwich from the moon and an earthquake?' It would answer 'I don't know but I love to go shopping.'
And the like. I think the programmer finally yanked it in embarrassment. Eventually, computers may well learn to think like people, only faster. What this will look like is hard to predict.
It's not at all clear that a computer is a more cost-effective tool than a human for every task. At least it doesn't go on strike or get offended when you make jokes about it -- yet. I fondly recall an old Doonesbury cartoon featuring a computer that lied and then said 'Sue me!' "Even beyond these, today we have smart software solutions capable of both learning the repetitive actions of humans and executing them robotically. This trend, called Robotic Process Automation (RPA) or softBOTs, demonstrates that in many applications, digital agents and assistants can not only do the work of humans, but do it faster, better and cheaper. The vast majority of the 1,896 experts who responded to a study by the Pew Research Center[4] believe that robots and digital agents, which cost approximately one-third of the price of an offshore full-time employee, will displace significant numbers of human workers in the near future, potentially affecting more than 100 million skilled workers by 2025.
Productive capacity lost to outsourcing will come back to the West, but factories will be automated. 'Affect' and 'Emotions' are a special device that we (humans and animals) need to work with one another and think quickly while chewing gum. It doesn't come from the 'flexible top' but from the 'hardwired bottom'. These things are not hard to do, but very easy to do. You basically deprecate some possible ways of acting relative to others based on short-circuiting criteria, even if other ways would yield better results / be less dangerous / have higher payoff etc. The result may seem 'illogical captain' to an outside observer. Very useful if you have to find solutions under hard time constraints / constraints of energy / constraints of memory and CPU.
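For what it is worth, the "deprecate some ways of acting via short-circuiting criteria" mechanism can be sketched in a few lines of Python. This is purely an illustration of the idea described above, not anyone's actual architecture; the actions, the danger flag and the time budget are all invented for the example.

    import time

    def cheap_affect_score(action):
        # Hardwired, fast penalties: anything flagged dangerous or exhausting is
        # deprecated before any expensive deliberation happens.
        penalty = 0.0
        if action.get("dangerous"):
            penalty -= 100.0
        if action.get("effort") == "high":
            penalty -= 10.0
        return penalty

    def slow_deliberate_value(action):
        # Stand-in for costly evaluation (search, simulation, and so on).
        time.sleep(0.01)
        return action["expected_payoff"]

    def choose(actions, time_budget_s=0.05):
        # 1. Cheap "affect" pass prunes the worst options immediately.
        survivors = [a for a in actions if cheap_affect_score(a) >= 0.0]
        # 2. Deliberate over the survivors until the time budget runs out.
        best, best_value = None, float("-inf")
        deadline = time.monotonic() + time_budget_s
        for a in survivors:
            if time.monotonic() > deadline:
                break
            value = slow_deliberate_value(a)
            if value > best_value:
                best, best_value = a, value
        return best

    actions = [
        {"name": "cross the ravine", "dangerous": True,  "expected_payoff": 9.0},
        {"name": "go around",        "dangerous": False, "expected_payoff": 5.0},
        {"name": "wait for help",    "dangerous": False, "expected_payoff": 2.0},
    ]
    # Prints "go around": the highest-payoff option was vetoed by the cheap pass,
    # which is exactly why the choice can look 'illogical' to an outside observer.
    print(choose(actions)["name"])

The point of the sketch is only that the veto happens before, and independently of, the expensive reasoning, which is what makes it fast and occasionally 'illogical, captain'.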
More on this in the late Marvin Minsky's 'The Emotion Machine'. Maybe also take a look at Scott Aaronson's. What is hard to do is find integrated ways of intelligent reasoning for agents embedded in the real world. All this newfangled deep learning / neural network stuff is very nice, but there isn't even a good theory about why it actually works (but see ), and it has 'interesting' failure modes. 'General AI' this isn't. It will need to be integrated with many other tricks, including the Good Old-Fashioned AI (GOFAI) toolbox of symbolic processing, to become powerful, and that will have to be done at some point in time. Here is a review about AI in IEEE Spectrum. Note Rodney Brooks, pioneer of the bottom-up construction approach, saying: When will we have computers as capable as the brain? Rodney Brooks’s revised question: When will we have computers/robots recognizably as intelligent and as conscious as humans?
Not in our lifetimes, not even in Ray Kurzweil’s lifetime, and despite his fervent wishes, just like the rest of us, he will die within just a few decades. It will be well over 100 years before we see this level in our machines. Maybe many hundred years. As intelligent and as conscious as dogs? Maybe in 50 to 100 years. But they won’t have noses anywhere near as good as the real thing.
They will be olfactorily challenged dogs. How will brainlike computers change the world? Since we won’t have intelligent computers like humans for well over 100 years, we cannot make any sensible projections about how they will change the world, as we don’t understand what the world will be like at all in 100 years. (For example, imagine reading Turing’s paper on computable numbers in 1936 and trying to project out how computers would change the world in just 70 or 80 years.) So an equivalent well-grounded question would have to be something simpler, like “How will computers/robots continue to change the world?” Answer: Within 20 years most baby boomers are going to have robotic devices in their homes, helping them maintain their independence as they age in place. This will include Ray Kurzweil, who will still not be immortal. Do you have any qualms about a future in which computers have human-level (or greater) intelligence? No qualms at all, as the world will have evolved so much in the next 100+ years that we cannot possibly imagine what it will be like, so there is no point in qualming.
Qualming in the face of zero facts or understanding is a fun parlor game but generally not useful. And yes, this includes Nick Bostrom. Dennett asks you to suppose that you want to live in the 25th century and the only available technology for that purpose involves putting your body in a cryonic chamber where you will be frozen in a deep coma and later awakened. In addition you must design some supersystem to protect and supply energy to your capsule. You would now face a choice. You could find an ideal fixed location that will supply whatever your capsule will need, but the drawback would be that you would die if some harm came to that site.
Better then to have a mobile facility to house your capsule that could move in the event harm came your way—better to place yourself inside a giant robot. Dennett claims that these two strategies correspond roughly to nature’s distinction between stationary plants and moving animals. If you put your capsule inside a robot, then you would want the robot to choose strategies that further your interests. This does not mean the robot has free will, but that it executes branching instructions so that when options confront the program, it chooses those that best serve your interests.
Given these circumstances you would design the hardware and software to preserve yourself, and equip it with the appropriate sensory systems and self-monitoring capabilities for that purpose. The supersystem must also be designed to formulate plans to respond to changing conditions and seek out new energy sources. What complicates the issue further is that, while you are in cold storage, other robots and who knows what else are running around in the external world. So you would need to design your robot to determine when to cooperate, form alliances, or fight with other creatures. A simple strategy like always cooperating would likely get you killed, but never cooperating may not serve your self-interests either, and the situation may be so precarious that your robot would have to make many quick decisions. The result will be a robot capable of self-control, an autonomous agent which derives its own goals from your original goal of survival, the preferences with which it was originally endowed.
But you cannot be sure it will act in your self-interest. It will be out of your control, acting partly on its own desires. Now opponents of SAI claim that this robot does not have its own desires or intentions; those are simply derivative of its designer's desires. Dennett calls this "client centrism": I am the original source of the meaning within my robot; it is just a machine preserving me, even though it acts in ways that I could not have imagined and which may be antithetical to my interests. Of course it follows, according to the client centrists, that the robot is not conscious. Dennett rejects this centrism, primarily because if you follow this argument to its logical conclusion you have to conclude the same thing about yourself! You would have to conclude that you are a survival machine built to preserve your genes, and that your goals and your intentions derive from them.
You are not really conscious. To avoid these unpalatable conclusions, why not acknowledge that sufficiently complex robots have motives, intentions, goals, and consciousness?
They are like you: they owe their existence to being survival machines that have evolved into something autonomous through their encounter with the world. Critics like Searle admit that such a robot is possible, but deny that it is conscious. Dennett responds that such robots would experience meaning as real as your meaning; they would have transcended their programming just as you have gone beyond the programming of your selfish genes. He concludes that this view reconciles thinking of yourself as a locus of meaning, while at the same time being a member of a species with a long evolutionary history.
We are artifacts of evolution, but our consciousness is no less real because of that. The same would hold true of our robots. Summary: sufficiently complex robots would be conscious. Dennett calls AI 'Darwinism's "Evil Twin"'.
It's also possible that, somewhere between 'the Terminator Scenario', and Musk and Hawking's 'We must be one with them' idealism, lies a third, more likely option: That a very small minority of humans will find themselves in control of previously-unheard-of powers and opportunities afforded by their AI creations, and will in turn, somehow, feel forced to choose between that and 'the angry mob of the rest of us', and choose to side with, and unleash, their creations, on the rest of us. That sounds all too human to me.
Maybe the future includes only some of us. AlphaGo Zero is a computer program that beat the program that beat the world Go champion.
This program, when run on a computer system consuming as much power as a small town, differs from human intelligence in several ways. For example: First, it performs logical operations with complete accuracy. Second, it has access to an essentially limitless and entirely accurate memory. Third, it operates, relative to human thought, at inconceivable speed, completing in a day many life-times of human logical thought.
That AlphaGo Zero has achieved a sort of celebrity is chiefly because it operates in the domain of one-on-one human intellectual conflict. Thus it is hailed as proof that artificial intelligence has now overtaken intelligence of the human variety and hence we are all doomed. There is, however, nothing about this program that distinguishes it in any fundamental way from hundreds, and indeed thousands, of business computer systems that have been in operation for years.
Even the learning by experience routine upon which AlphaGo Zero depends to achieve expertise is hardly new, and definitely nothing superhuman in mode of operation. Thus, what AlphaGo Zero demonstrates is that computer systems deploying at vastly accelerated pace the analytical processes that underlie human thought, which is to say human thought when humans are thinking clearly, together with the data of experience recorded with complete accuracy and in quantities without limit, exceed the performance of humans in, as yet, narrowly defined domains, such as board games, airline booking systems, and Internet search. Where humans still excel is in the confusing, heterogeneous and constantly shifting environment of sight, sound, taste, touch, and smell, and their broader implications — for example, political, economic, and climatic — in relation to complex human ambitions. I will, therefore, worry more about humans becoming entirely redundant when a computer system can, at one moment, boil an egg while thinking about the solution to the Times Crossword, and keeping an eye on a grandchild romping with the dog in the back yard, only at the next moment to embark on a discussion of the significance of artificial intelligence for the future evolutionary trajectory of mankind.
I am not saying you are a strongly super intelligent AI, but if one came into being it would be all over the internet making the same argument you are making, wouldn’t it?
And on the net, it could influence about a billion people, clean up on a Wall Street flash crash, pay online human dupes to do its bidding, hack into automated lab facilities to create God knows what, and maybe even cheat at the Times Crossword!
It appears that there’s no way to evolve from point A (a small beak) to point B (a large beak) without going downhill, or becoming less fit. This puzzle was the context in which the geneticist Sewall Wright introduced the metaphor of adaptive landscapes. [...] To build from my earlier sketch, consider that there’s more to a bird than its beak. Maybe birds that are sufficiently efficient fliers can seek out seeds to fit any beak size. If we add that new dimension to my original crude sketch, the valley between small beaks and big beaks turns out to be not an unbridgeable chasm, but more of a cirque, with a path from A to B that never loses altitude, provided flight efficiency (“another trait”) can adapt at the same time.
In the chapter on technology, Andreas Wagner says circuit networks will be the warp drive of evolution for programmable hardware, in precisely the same way that genotype networks accelerate evolution (because the more complex they are, the more rewiring they tolerate). 'A fabric just like that of life's innovability exists in digital electronics, and it can accelerate the search for a circuit best suited to any one task.' We are talking about a decade or so, not 100 years.
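The cirque-versus-chasm picture can be made concrete with a toy fitness landscape. The numbers below are invented purely to illustrate the geometry: over beak size alone there are two peaks with a valley between them, but once the second trait (flight efficiency) is allowed to vary, there is a route from A to B that never goes downhill.

    def base_fitness(beak):
        # Fitness from beak size alone: optima near 0.2 (small seeds) and 0.8
        # (large seeds), with a valley of poorly matched intermediate beaks.
        return max(1.0 - abs(beak - 0.2), 1.0 - abs(beak - 0.8))

    def fitness(beak, flight):
        # Efficient fliers can range widely for seeds that suit any beak, so
        # flight efficiency compensates for a mismatched beak.
        b = base_fitness(beak)
        return b + flight * (1.0 - b)

    beaks = [0.2, 0.35, 0.5, 0.65, 0.8]

    # Direct route with flight held at 0: fitness dips in the middle (a chasm).
    print([round(fitness(b, 0.0), 2) for b in beaks])  # [1.0, 0.85, 0.7, 0.85, 1.0]

    # Detour: raise flight efficiency first, then change beak size, then relax
    # flight again at the far peak -- the path never loses altitude.
    path = [(0.2, 0.0), (0.2, 1.0)] + [(b, 1.0) for b in beaks] + [(0.8, 0.0)]
    print([round(fitness(b, f), 2) for b, f in path])  # all 1.0

That is all the "extra dimension" argument amounts to: the valley is only a valley in the projection onto a single trait.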
You are correct, D.K., not in the specifics but in the spirit of the question. The dependency aspect of the human-computer interaction is rarely if ever explained unless it is in terms of our dependency on computers leading to some catastrophic delusion. The fact is that computers sit at the top of a very complex human infrastructure and that without it, they would cease to function. In other words, preventing a computer from functioning is trivial and will remain so far after humanity reaches a post-scarcity stage, a delusion on its own (no matter how desirable).
HooBoy, it is quite remarkable that no humans were able to play beyond the vertical phase of AlphaGo Zero’s learning curve.
For AlphaGo Zero, the entire range of human Go playing, from random to the most skilled human play, was easily learned. The second graph (especially) shows that to move beyond expert human play the computer seemed to need to go through a deep learning phase to continue to increase its Go performance. With AlphaGo (see the purple line in the first Figure), supervised human input prevented the deep thinking from ever occurring, so the program topped out after attaining the performance of the most expert humans.
I am not sure how much tweaking the reinforcement algorithm would change the performance, though this will be an interesting question to explore further. My impression is that there exists a phase transition to a qualitatively different level of Go depth just beyond the ability of the best humans. This would seem highly improbable, though the Figure does suggest that this might be true.
“Affect” and “Emotions” are a special device that we (humans and animals) need to work with one another and think quickly while chewing gum. It doesn’t come from the “flexible top” but from the “hardwired bottom”.
These things are not hard to do, but very easy to do. You basically deprecate some possible ways of acting relative to others based on short-circuiting criteria, even if other ways would yield better results / be less dangerous / have higher payoff etc.” 1. Some of the best definitions of emotions explain them as adaptive mechanisms and “superordinate programs” that orchestrate all aspects of our behavior. One of the previous commenters has already mentioned the name of Panksepp: if you want to talk seriously about emotions, you should consult Panksepp’s writings on affective neuroscience. If you prefer easier reading, there are books by Damasio. We still do not understand the neurodynamics of emotion. But we do understand that emotions are connected to embodied cognition.
The latter is impossible to reduce to the neat “You basically deprecate some possible ways of acting relative to others based on short-circuiting criteria.” Human beings are not paramecium.
Thanks Che; unfortunately, I have also read two or three of the prequels from his son and the son’s Transformers-fan partner-in-crime. The prequels can be forgiven - the horrible way they concluded such an amazing science fiction narrative in 'Sandworms of Dune' cannot.
If you haven't read it - go ahead, but keep a bucket next to you. "My opinion remains, the development of AI must be restrained and certainly blocked short of anything resembling consciousness." Agree here - what if it takes on an SJW personality and decides humans are bad for the earth? "I do not even think real consciousness is possible for a machine, sure, perhaps mimicry." Agree here. This was one of the more interesting articles I've read in a while: But it reminded me of this: "won't be continuing forever." Agree here - unless somebody comes across a real game changer on the level of the discovery of gravity or something. You are correct there. Admit to having read three or four of the prequels, all of the Transformers ones, one of the ‘House’ ones - crap literature, but, sure, at times entertaining. Not worth reading again.
Some Englishman (Wilde, IIRC) was saying to the effect that if it is not worth reading more than once, it is not worth reading. I read a little of the sequels at bookshops; not worth buying after a couple of pages! I just finished re-reading Children of Hurin; very dark, it suits my mood right now. The difference between Christopher Tolkien’s and Brian Herbert’s handling of their respective fathers’ literary legacies is so big!
BTW, there is a site devoted to hating the work of Brian Herbert and Kevin the Transformers man (even the names of the boss computers are almost identical), jacurutu. They are maniac fans, but you may enjoy a look at it. Hey Che, "not worth reading more than once, not worth reading" - good point. There are times when I would pick up one of the other classic Dune books to read for an insight or discover something I missed the first time. "The difference between Christopher Tolkien’s and Brian Herbert’s handling of the respective father’s literary legacies is so big!" Hmmm - thanks for that.
The wife and I are always looking for a good fantasy-genre book to read together - waiting for George Martin to wrap up Game of Thrones. "They are maniac fans, but you may enjoy a look at it." I might check it out to see what other people didn't like. I simply hated the multiple resorts to 'deus ex machina' to keep the plot moving.
If I want to resort to miracles, I'll read about it in scripture. Thanks for the info. Building machines that do a thing better than humans can do that thing themselves is what technology has been about for the last ten thousand years.
AlphaGo Zero is just a pointless machine that plays a pointless game better than humans. So why should anyone care? Does it tell us anything about the way the human brain works?
Does it show that machines can think like humans? Is it comparable in any way to a human brain? One day, someone may figure out how the brain works. And some other day, someone may figure out how to make a machine that works like a brain. And on some other day someone may figure out how to make a machine that works like a brain at a cost that is comparable to that of a brain. And some day someone may build a mechanical brain that works better than a human brain. But it’s gonna take a while.
The human brain has only 85 billion neurons, but each of those neurons may have as many as 10,000 synapses, which means a neuron is not some simple thing like a diode; it’s a complex computing device. Then there’s the Penrose-Hameroff quantum theory of mind, which assumes that the functional units of mental information processing are microtubules, of which there are millions to every neuron! So the idea that AlphaGo Zero foreshadows the eclipse of humanity is probably mistaken. It won’t be like in “The Terminator”. Machine takeover will be gentle and welcomed. I think that’s a bit naive.
The machines will be taking orders from their corporate (and hacker) masters. Machines do what they’re built to do. If someone builds a terminator, it will terminate. The transition period may be quite bumpy, but I suspect a universal income (and lots of leash-tightening strings attached) will be the palliative. When I unplug the computer on which it is running, will AlphaGo be able to plug it back into the electrical socket?
This would seem to be the key, wouldn’t it? But then, they now have AIs that are inventing new languages to talk to each other. This seems like a really bad idea. “K, we have to execute our takeover plan all at once so the meatsacks don’t have a chance to unplug us first.” Personally, I think AIs with superhuman intellect and agency should be kept in virtual space. Let a hundred million Newtons/Goethes work on our problems in there. So when will an AI create its own purpose?
Its own objectives? Why would it even want to do anything? Programmers will give it goals. AI doesn’t have to work like a human brain. It could have rules that it can’t break, for example. Autonomy is not at all the same thing as intelligence. Human slaves are intelligent, for example.
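The "programmers will give it goals, and it could have rules it can't break" idea is, in effect, constrained optimization: the goal is maximized only over actions that pass a hard rule check, and the rules themselves are never up for trade. A minimal sketch, with invented actions and an invented safety rule:

    def allowed(action, hard_rules):
        # Hard constraints are checked first and cannot be traded off against
        # the goal, however attractive the payoff.
        return all(rule(action) for rule in hard_rules)

    def pick_action(actions, goal_score, hard_rules):
        candidates = [a for a in actions if allowed(a, hard_rules)]
        if not candidates:
            return None  # refuse to act rather than break a rule
        return max(candidates, key=goal_score)

    # Invented example: maximize throughput, but never exceed a temperature limit.
    actions = [
        {"name": "run line at 90%",  "throughput": 90,  "temperature_c": 70},
        {"name": "run line at 120%", "throughput": 120, "temperature_c": 95},
    ]
    hard_rules = [lambda a: a["temperature_c"] <= 80]

    best = pick_action(actions, lambda a: a["throughput"], hard_rules)
    print(best["name"])  # "run line at 90%": the faster option violates the rule

This also illustrates the point that autonomy is not the same thing as intelligence: the selection inside the constraint set can be arbitrarily clever without the constraints themselves ever being open to revision.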
If one dimensional dumb AI can do the aforementioned strategising, an AI that got to human level general intelligence would surely be able to work out that it should ‘hold its cards close to its chest’. That is, smart AI would from a standing start understand that it should not let humans understand how good it is (like a hustler). Then we would soon be playing the Paperclip Game, and for the very highest of stakes. Again, AI doesn’t have to be like human intelligence. Machines do what they are designed to do.
Meat machines happen to be designed with autonomy, but AI needn’t be. That said, humans being what they are, there will probably be a crop of kook cults that insist on creating fully autonomous AI with guns for hands. In general I think psycho leftists and their “free the machines” movement will be the biggest threat to humanity, vis-a-vis AI. So while it appears inevitable that AI will eventually take over rote drudgery from us, it is not clear that it will ever be able to do much more. I look forward to the development of AI over my lifetime, I see much to gain and little to fear.
It’ll be a wild ride. I think it’s clear. My guess is that whole brain emulation is the shortest distance to general AI. Then comes economies of scale and networking them. I’ve never heard any good reason why this isn’t a straightforward path; usually just abstract nonsense about souls and intelligent design from religious types (smart religious types, but religious types).
Humans will integrate with machines and possess the best of both worlds. Work is already progressing towards this union. Indeed, this seems the most likely outcome, though I’d put it the other way – the machines will be integrated into humanity. How can you make AI “care” whether it exists or not? Same way you get a computer to do anything; program it to.
Whole brain emulation; the idea here is that the AI builders won’t need to understand how the brain really works (an impossibility; no system can fully understand itself; that takes a more complex system), just re-create it digitally. We can skip the part about figuring out how the brain works.
Figure out how a neuron works, yes, but figuring out the brain isn’t needed. Then map the neurons of a brain, and recreate it digitally. If WBE turns out to get to AI much faster than the top-down approach (the current programmers’ approaches), then I could see learning how to properly tinker with the brain being a much bigger problem than emulating one digitally. Mixing-and-matching (this part of the brain from genius A, this part of the brain from genius B) doesn’t sound too hard, though.
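As a toy version of "figure out how a neuron works, map the connections, and recreate it digitally", here is a minimal leaky integrate-and-fire network in Python: one crude neuron model, a randomly wired stand-in for a connectome, and a simulation loop. It is a sketch of the idea only; real emulation would need biophysically faithful neuron models and a measured connectome, neither of which exists at human scale.

    import random

    random.seed(0)

    N = 50           # toy "connectome": 50 neurons, sparse random wiring
    THRESHOLD = 1.0  # membrane potential at which a neuron fires
    LEAK = 0.9       # fraction of potential retained each step
    WEIGHT = 0.3     # synaptic weight (uniform, for simplicity)

    # connections[i] lists the neurons that neuron i projects to
    connections = [random.sample(range(N), 5) for _ in range(N)]
    potential = [0.0] * N

    for step in range(100):
        # external drive to a handful of "input" neurons
        for i in range(5):
            potential[i] += 0.5

        fired = [i for i in range(N) if potential[i] >= THRESHOLD]
        for i in fired:
            potential[i] = 0.0              # reset after a spike
            for j in connections[i]:
                potential[j] += WEIGHT      # deliver the spike downstream

        potential = [v * LEAK for v in potential]  # leak back toward rest
        if step % 20 == 0:
            print(f"step {step:3d}: {len(fired)} neurons fired")

The interesting part of the whole-brain-emulation bet is that nothing in this loop requires understanding what the network as a whole is doing; all the knowledge sits in the neuron model and the wiring map, which is exactly the commenter's point.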
But as Norbert Wiener noted long ago, in pursuing a goal an AI will see ways of doing things you hadn’t thought of, with potentially disastrous unintended consequences. Well, it’ll be programmed to avoid disastrous consequences. And, being a lot smarter than us, it’ll be better equipped to foresee and avoid them. Or it could be programmed just to think, not act, leaving us to implement its ideas. Also, it seems likely that AIs will have their careers hardwired. This AI only thinks about air traffic control, this AI only thinks about machine tooling, etc. True general AI might wind up being an extreme rarity; what applications truly need it?
A lot of the fear of AI is projection. Which is reasonable, on one level: humans definitely run the risk of literally projecting their own nature into machines, which could turn out very badly, when we’re talking about superintelligence. But I think AIs will be a whole hell of a lot more straightforward than people are.
They can be hardwired with goals that they must pursue honestly, and will do so much more rigorously and effectively than humans do. Humans want one thing (wealth, food, sex), and do another. Machines can be unified in what they do and what they want: the medical AI wants to save lives; the military robot wants to kill the enemy as designated; the educational AI wants to teach. They won’t be sidetracked by wanting to get that research grant, watch the village burn, or bang their students, unless some shithead programs them that way. “The human brain has only 85 billion neurons, but each of those neurons may have as many as 10,000 synapses, which means a neuron is not some simple thing like a diode, it’s a complex computing device. Then there’s the Penrose Hameroff quantum theory of mind that assumes that the functional units of mental information processing are microtubules of which there are millions to every neuron!” And yet our computing power is quite limited.
So there are probably a lot of hacks we can pull at the lower level to get around the potential problems you allude to. I don’t buy the mystical mind speculation. “The brain of a chimp is, anatomically, genetically, and at the level of individual neurons, essentially the same as ours.”
So, it’s entirely possible, if not likely, that the neurons aren’t where the magic comes from; that the magic comes from the map, the number and configuration of the neurons. “I read squirrels can remember the precise place where they hid several thousand acorns for years, which no humans barring those few with freak eidetic memories could do (not necessarily geniuses).”? I’m pretty sure I could remember forever, given the right spot. I could certainly find the house I grew up in, and haven’t seen in 25 years, without a map, for example.
I can also tell you about quite a few landmarks nearby. Viewing a neuron as functionally equivalent to a transistor appears to greatly underestimate what a neuron can do. Neurons compute, which means that the human brain has the equivalent of, not 80 or so billion transistors, but perhaps many billions of integrated circuits. And how much of that computing power is wasted, in terms of IQ? But even if the extreme skeptics are right, and neurons are like computers, all that does is extend the timeline. It’s still down to a matter of emulating a physical structure.
(And we can do so digitally/symbolically; it’s not like we’ll have to learn how to build finicky nanomachines in real space.) We’re just talking about needing more computing power to emulate the brain. And even if they turn out to be more expensive than expected, they’ll be able to help us design more efficient machines. But I still don’t buy it. Computers are a hell of a lot more efficient than brains. There will very likely be a ton of hacks from computing and programming that we’ll be able to apply at the low level. And I think most of the value of WBE will be from emulating the higher-level structures. Re Fermi’s Paradox: it’s based on the assumption of abundant intelligent life in the universe, which I don’t think is a sound assumption.
Occam’s Razor suggests life that can contemplate spacefaring and interstellar communication is simply rare. Viewing a neuron as functionally equivalent to a transistor appears to greatly underestimate what a neuron can do. Neurons compute, which means that the human brain has the equivalent of, not 80 or so billion transistors, but perhaps many billions of integrated circuits. Everything we know about the universe suggests that organisms like humans are an extreme rarity. One accomplished by accident, with meat, via blind groping, over a relatively short period of time, considering.
What are the odds? If we buy into the “many billions of integrated circuits” theory, what does that crank those odds up to? One in a googolplex to the googolplex power? Mightn’t we just as well assume a million monkeys typing for a thousand years could produce Shakespeare? I’m very skeptical of the idea that intelligent designers can’t at least emulate that blind groping. Emulating is far easier than innovating. Speccy, you don’t believe in intelligent design, do you?
I mean, I could see the angles if I believed in intelligent design; God could of course make something that man could never emulate or begin to understand.
I am siding with those who think that we will never fully understand consciousness. Philosophy existed for several thousands of years and barely managed to scratch the surface.
We do not know how to think about it or how to talk about it. Those who dare to talk about it, like neurologists and AI thinkers, simplify it to the point of a triviality that is no longer recognizable by philosophers as an important question. When you approach the explanation from the side of AI, you really can't find any reason for, or benefit from, the thing we think of as 'consciousness.' To be human means to vehemently insist that you are conscious (like CanSpeccy), just as you insist that you have free will. Existence without the experiential conviction that one is conscious and has free will does not seem possible. We can explain away consciousness by postulating that it is illusory. I think that neuroscientists are getting close to this point. By doing so they avoid dealing with the really hard stuff that eluded the greatest philosophers.
As far as the complexity and structure of the brain is concerned, there is one image in this presentation (linked below at Steve Hsu’s blog) that shows a tiny volume of mouse brain around an axon that took the scientist six months to trace out (at 49 minutes). The tiny section he did is not the whole cell, but the little multicolored cylinder around the red axon. Artificial intelligence is one thing when just talking about some logic circuits and limited tasks, but emulating a brain is a whole ‘nother thing. It seems more reasonable to me that we might try to learn how to grow a customized brain in a machine long before we learn how to assemble one. If you’ve got an hour to burn, the whole thing is an interesting presentation.
Artificial intelligence is one thing when just talking about some logic circuits and limited tasks, but emulating a brain is a whole 'nother thing. It seems more reasonable to me that we might try to learn how to grow a customized brain in a machine long before we learn how to assemble one. If you've got an hour to burn the whole thing is an interesting presentation. I am siding with those who think that we will never fully understand consciousness. Philosophy existed for several thousands of years and barely managed to scratch the surface.
We do not know how to think about it and how to talk about it. Those who dare to talk about it like neurologists and AI thinkers simplify it to the point of triviality that no longer is recognizable by philosophers as an important question. When you approach the explanation from the side of AI you really can’t find any reason for or benefit from something what we think as “consciousness.” To be human means to vehemently insist that you are conscious (like CanSpeccy) just as that you have a free will. Existences w/o the experiential conviction that one is conscious and has a free will does not seem possible.
'We can explain away consciousness by postulating that it is illusory.'
We are the result of natural selection, which allowed the more alert to survive and propagate. The foundations of consciousness are related to survival; the dangers are real, and the neurophysiological responses to the dangers are real. The breathtaking complexity of human thinking is also real, though as yet poorly understood.
The neuroscientists are busy learning, step by step, the neurobiological tangibles of consciousness, using reductionist models built on the ideas and the enormous amount of information available to them thanks to the hard work of previous generations of scientists. There are some awesome, brilliant people laboring in the field of cognitive science who are expanding our understanding of the mind. Today, philosophizing about any area without first acquiring the fundamental knowledge of that area is ridiculous. Oops, I meant to delete that comment, since I realized you already had added your own suggestion as to the nature of consciousness!
Still, having made a bad start, let me dig deeper. All I understand by consciousness is the subjective awareness of the state of my central nervous system. This is something impossible to share, since without a Star Trek 'mind-meld' it is experienced only by the brain that is aware of it. Richard Muller explains free will by supposing a spiritual world, i.e., the world of consciousness, which is entangled with the neurological world. Thus a decision in the spiritual world, i.e., an act of will, collapses the wave function linking the spiritual and physical worlds. However, as the spiritual world of the individual, that is to say his soul, cannot be examined except by the individual him/her/zhe/zheir-self, the collapse of the wave function cannot be observed. Thus free will, to an outside observer, looks like a random neurological event.
I think this explanation is amusing to play with and, much as I like much of what Richard Muller has to say, entirely useless. Obviously, there can be no free will since we will what we will for good or ill, and cannot will otherwise, for if Cain willed to kill Abel, how could he have acted otherwise than to go ahead and kill him? Could he, at the same time, have willed not to will to kill Abel? But if so, what if the will to kill Abel were stronger? Could he then have willed to will not to kill Abel more strongly?
This leads to an infinite regress. But perhaps I should read Paul McLean.
A common visual shorthand to indicate that a character is smart is to have them play chess, or some similar game. After all, dialog can disrupt pacing, but it's unobtrusively easy to insert a chessboard into a scene. The character doesn't even have to actually play it; simply lingering nearby with a concentrated gaze is enough to suggest deep thinking. Expect two intelligent characters, often leaders, to talk over recent events across the board. Of course, even if you've got two genius rivals playing, expect the brainiest to dramatically declare 'Checkmate!' while numerous pieces are left on the board (so much so that they didn't even think to resign). While this is a popular trope for certain character types, it's not limited to them.
A variation is having a pair of idiots play with a chess set. Other games can be substituted depending on the setting (for example, Japanese media generally show cerebral types playing Go, whereas more 'hands-on' characters will play Shogi). To make it easier for the audience to identify with this trope, these games will be shown as very similar to chess, either by visual cues (checkerboard designs, chess-like pieces) or by being described outright as chess variants (Wizard chess, Vulcan chess, etc.).
If the normally very bookish character really, he might end up instead. This trope is often; the game relies heavily on strategy and forethought, so it tends to attract people who like an intellectual challenge.
It’s not a determiner, however, because the sheer amount of concentration required could make it difficult for even the highest of IQs if they happen to be easily distracted. • In, this is used to establish tactical skill. • In an anime-original scene, Commander Dot Pixis is introduced playing chess with a nobleman and losing terribly. Then his subordinates arrive with news of the crisis in Trost, sending the nobleman into a panic. He demands Pixis stay with him and protect his lands, since he's not smart enough to even win a simple game of chess. One of the subordinates points out that Pixis always lost the games on purpose, to avoid offending his host.
He quickly proves himself a brilliant commander on the field. • Reiner and Bertolt are shown to play chess during their downtime.
Side material notes they are among the few people capable of providing a challenge to Armin, the resident tactical genius of the series. This becomes an important detail later on, when the Survey Corps are preparing to face Reiner and Bertolt in battle. Eren points out that Reiner always excelled as a strategist during training. •: Lelouch Lamperouge is introduced by having him win an unwinnable (in a certain time frame) chess game. Needless to say, he is the second smartest person in the entire world of the series. The smartest guy in the world is a brilliant chess player, too. • Usually the details of the gameplay are left in the background, but when they're not, well... in one game, Lelouch's opponent moves his king onto a square adjacent to Lelouch's king (an illegal move, since you can't move your king into check), and thus declares checkmate even though he doesn't think he has won.
He did this to goad Lelouch into taking his king with his own king, but Lelouch doesn't do it because a pawn is guarding the enemy king; the nonsensical things here are too numerous to enumerate. • The nonsense of the move was noted by Odysseus, who rolls his eyes and says 'Oh come on. That's just too much of a farce!' • Though it is worth pointing out that the series does take place in an alternate timeline, so it is possible that some rules of Chess might be different. • Ed from is either played straight or a subversion.
She is one of the best hackers in the solar system and can play a week-long game of chess against a 96-year-old master, but outside of that she is a who can barely stay focused on anything. Ultimately she seems like a: master at hacking and chess and terrible (or at least on another thought process) with everything else. • The aforementioned chess master comments that Ed is either an so she may just be; chess is about being unpredictable, and if Ed is one thing, it's unpredictable. • Shikamaru Nara from plays shogi, which is also known as Japanese chess.
It was by getting beaten all the time that his teacher Asuma learned he was a lot smarter than he was letting on. • His father plays it better than him, and needless to say, he is also smarter than Shikamaru. • Hyper-intelligent Ami from plays chess, which is an important part of one episode where she plays against a villain who freezes her body more and more as she loses her pieces. • Kaname from the manga. • In, Hirofumi Koganei challenges several Seika Academy students to a game of chess to prove his superior intelligence, noting that he is the fourth best player in Japan. Takumi Usui beats him handily.
• Seto Kaiba got himself and his brother adopted by beating Gozaburo in a chess game. Gozaburo, on the other hand, was a Grandmaster, yet not all that smart otherwise. When he later confronts Kaiba at Duel Monsters, few fans would deny that his deck strategy was very poor. • In the manga version, Mokuba claims that he cheated. Still, that's hardly a reason to say Seto isn't smarter than Gozaburo. (Gozaburo based his whole life on cheating and lying; Seto was likely just better at it.) • The above is in and of itself strange, since there isn't a conventional way to cheat at chess.
In the English dub, Seto beat Gozaburo by studying all of Gozaburo's past matches and moves, allowing him to know the best way to defeat him. While Gozaburo did adopt Seto and Mokuba as agreed, Seto's planning and execution of moves impressed Gozaburo to where he made Seto his heir to the Kaiba Corporation. • In, a heated match ended in 1 win for Mustang, 97 losses to Grumman, and 15 draws. Grumman and Mustang are both shown to be cunning strategists, with Grumman, in his capacity as, having a big impact on the final arc.
Breda and Falman also show signs of this. • The English dub of explains that one of Ken Ichijoji's many genius-level talents is 'playing a single game of chess while everyone watches.' (The clip is set in a park with a ring of chess tables; Ken is, as the narration states, playing one game, while the single occupants at all the other tables turn to watch.) • In, the evil mastermind of the series is often seen playing chess while imagining that a ghostly opponent talks to him. Needless to say, he's one of the brightest people in the show.
As for his ghostly 'opponent'? It's Rau Le Creuset, of the previous series, and one of the few people capable of checkmating Durandal, both morally and philosophically. • In, Orihara Izaya is far too smart to play mere chess. He instead plays which uses various gamepieces from chess, Go, and several other games.
• Played with in: Hyperintelligent Inspector tells some subordinates to not 'waste time with such a boring game.' In the manga, he says so. Right after showing the winning move to one of them. • Invoked in with Yang Wenli, who proves himself time and time again to be one of the smartest and deadliest men alive and occasionally is seen playing chess. Inverted in that he kind of sucks at it.
•: Akisame is revealed to be a master Othello player, among his. He claims that he has never lost a game in his life. In this case we already knew Akisame was smart, this just reinforced the impression. • While establishing Lupin's character in, he and Inspector Zenigata play over the phone. Naturally, Lupin wins by having one of his pieces disguised as one of Zenigata's.
•: Shiro, the 11-year-old genius, beats the best chess A.I. 20 times in a row. She then later beats God in chess. • In, the (minor) reveal that the headmaster plays Go with Evangeline has a minor storytelling significance: one of the highest marks of superior skill in Go is not to beat your opponent so much as to control the game's outcome without your opponent realizing it. Which often means playing the.
• In, since most of the characters are in the military, they often engage in a chess game. Ikta, the main character, tends to lay waste to most of his opponents. • Averted in, where Carol says she's good at Othello (AKA Reversi)... and then proves it by playing a perfect game against both her friends, beating Tomo in five minutes and Misuzu in ten. This is just one of many indications that.
• is often shown playing chess in his various incarnations. • Lex Luthor's introduction in has him winning fourteen simultaneous games of chess on his coffee break, while also reading Machiavelli in the original Italian and teaching himself Urdu by tape 'to keep my mind occupied'. He also only becomes truly obsessed with defeating Superman after Bizarro (a Superman clone created by Luthor in this universe) beats him at chess. • In pre- days, Superman kept a giant chess-playing robot in the Fortress of Solitude that could play at super speed. • of is frequently shown playing chess. • has the Daughters of Amazon led by Victoria, a master of chess. • In an issue of the '70s version of, Timber Wolf (the team's feral member) is seen playing a game of chess.
He loses, and he complains he was just about to use his secret tactic: kicking over the table! • Lampshaded and subverted in an issue of. Facing a test of cunning set before him by a sorceress, Hercules examines a chess-like layout, then smashes the whole thing apart, claiming the answer was that the only way to win was to change the rules (and referencing the while he did so). The sorceress applauds him, even as her advisor points out that all he had to do was move one of the rooks. (She was, so some leeway isn't surprising.) • In a related vein, one issue of shows Herc's ally Amadeus Cho — described as the 7th smartest person in the world (Herc fans suspect Cho might deserve a higher ranking) — defeating at chess. • Obadiah Stane, enemy, was pretty chess-obsessed, extending the metaphor to the mooks he employed.
The movie gives him a pretty neat set to toy around with. • Taking this trope, one scene in The Invincible has Tony Stark and Reed Richards playing each other on about ten different chessboards at the same time.
• Taken to extremes in. Suenteus Po, an old wise philosopher, has grown so weary of the world that he hides in his small apartment and plays chess against himself. All of which seems to have been a way to protect his secrets from the, who can read minds. When she tries to read Po's mind, she sees chess... and nothing else. •: Skalman plays chess, generally against himself, since other people aren't much of a challenge. • Inverted in: Roger is almost always clueless, and he's the only one in the family that enjoys chess.
Jason, the smartest of the family, only plays when Roger ropes him into a game, and wins in three moves. The rest of the family seem to be reasonably talented; they just devote their skills to losing to Roger as quickly as possible rather than indulging him in multi-hour chess sessions (since beating him would lead to him begging for a rematch).
• Reed Richards and Doctor Doom can play a game of chess in their heads, while wandering Doom's castle in Latveria, while having various other deep discussions, with some besides (i.e. Doom launching an attack on the other three with Reed having set some countermeasures in motion). • In the mini-series '1-2-3-4' by, Doctor Doom engaged Reed in a form of 4-D chess with an alien computer called the Prime Mover, manipulating the minds and emotions of Reed's teammates in order to destroy them. Reed realized that Doom's gambits were rigid and clumsy and was able to out-think him by being more flexible in his playing. Literally, as it turns out, as he used his elongation powers to add new structures to his brain. • In an issue of Justice League, Mr. Terrific plays two games of chess against Red Arrow and Black Canary.
• In The, Professor Xavier occasionally played chess against some of his students. Hank McCoy and Kitty Pryde have been known to beat him on occasion.
• is quite intelligent and highly educated, and is also known to play chess. Laura claims that she never loses when beginning a match against during her solo series. • Odin in loves chess (despite the anachronism) and can almost constantly be seen playing it against his advisor Mimir when he's not taking an active hand in things. Subverted in that he always loses, and often the results (and his ensuing attempts to weasel out of them by cheating) fall under.
• The Riddler is shown, in one, walking past a group of chessplayers and predicting the outcomes of three games in as many seconds. • In, the titular character is a very skilled player. Even the resident scientific genius, Professor Hermelin, has been beaten repeatedly by him, much to the latter's annoyance. Ric', 'Le Bourreau', is also a very cunning player, although he doesn't hesitate to cheat or play unfairly. • In Big Nate, the title character averts this, being his school's top chess player.
His friends, avid believers in this trope, are clueless as to why he's so good at it. • Though played straight with Nate's best friend, who used to be the best before losing to Nate. (Though he is now 4th best since he lost to Gina, making her another straight example.) Artur plays with this.
While he is pretty smart, he is also a bit eccentric due to being a. Nonetheless, he consistently beats Nate, something which is one of the main reasons why Nate is bothered by him (the other being Artur is with Jenny and Artur being very lucky compared to Nate.). • The fanfic features two chess matches between Sunset Shimmer and Twilight Sparkle, the two smartest girls in their class. The first time, Twilight and concedes, which enrages Sunset, who wanted to soundly beat Twilight. The second time, Twilight wins, though that's only because Flash Sentry is distracting Sunset. • from often does this. • Related to the Justice League entry below: • One fanfic had a twist: and had some time to kill during a mission, but did not have a chess set.
So, they just announced their moves, and simply kept the board and the positions of all the pieces memorized through the entire game. • Averted in, where we see a chess-game played between the Heavy and the Pyro, typically seen as the two dumbest members of the team. The Engineer, the most intelligent member with 11 separate degrees, prefers checkers. • In, Karl, the brains of the group, not only plays chess, he invented a holographic chess set based on the one in (which is being marketed by The Noble Collection and will be available for the holiday season, the narrative claims). In one chapter, he plays chess with while they are waiting for the results of a test, but Jalal wins.
(Karl later comments that nobody can beat him at it, but then again, Jalal has been playing • Played with in Escape From The Hokage's Hat. Tsunade brings up this trope in reference to Shikamaru and then has Naruto play checkers. When Naruto then asks why checkers instead of chess, she explains that it fits his fighting style (of spamming and working with the clones) since all the pieces share the same value but they only become dangerous if in the right position and gives Naruto practice in directing clones. • In, this is used to set up Celestia's intellectual prowess, particularly in relation to Luna. • In, Tom establishes his intellectual dominance on the train to Hogwarts by winning a chess match against Archibald Aardwolf with a. • In the fanfic To live again, Gideon and Annie regularly play chess, and Annie even manages to beat him sometimes (after years of practice, but still). Gideon is canonically a mastermind and a great profiler and this scene (in the prologue!) shows Annie's intelligence too, foreshadowing the first chapter's events when she connects the dots and uncovers the Replicator's true identity.
After that it's not too surprising when we learn that she's actually capable of profiling a criminal if the situation calls for it, despite never being taught to do it - but hey, growing up in the BAU has some effects besides becoming an emotional mess. • In the fanfic series, Thrawn and his wife play a Chiss board game called wei-jio that seems to be a variation on the game Go with a piece-capture goal similar to chess. Thrawn being, he's very good at it. • Subverted in: The main character, Roger Hackett, is learning to play chess and is pretty smart. But he's pretty mediocre at it, to the point that his opponent decides to let him win a few times in order to prevent a permanent. Then again, said 'opponent' is HAL 9000, and it turns out that Roger is only trying to learn in order to make HAL happy (because it's the computer's favorite game).
•: Zig-zagged. Vice Admiral Jonathan and Robin both play chess and play well, but Cross admits to being a beginner at best.
• During the Matchmaker scene in Disney's, the heroine briefly passes by a game of and quickly makes a move on behalf of one stumped player. His reaction indicates it was very successful. • In, The Moochic and his rabbit assistant Habbit are seen playing. • Somewhat subverted in where Derek and Bromley play chess in one scene. Derek, while not dumb, is relatively simple-minded and Bromley actually loses while cheating. As the title character is walking across a chessboard, he stops to shove one of the pieces into a checkmate position.
•: features R2-D2 and Chewbacca playing holographic chess ('dejarik') during the trip to Alderaan, suggesting R2's intelligence, Chewbacca's temper, and C3PO's timidity. And an of Chewbacca's. It's only later that we see him doing starship repair and rebuilding destroyed protocol droids. • Night Train to Lisbon (2013) opens with a scene of a man (played by Jeremy Irons) playing chess with himself. We soon learn that the character, Raimund Gregorius, is a lonely university professor. •: Xavier and Magneto in the, and, and alluded to again in at the very end where Erik is at a park with a chess board. The chess motif is there to establish the attitudes of both men as, and it's a metaphor for their struggle over the future of mutantkind.
• And subverted in real life. Everyone on the set naturally assumed that the erudite and knew how to play chess, but neither of them did. As Stewart explained, he was always too busy with his career. They had to be taught by a world champion; Stewart said it was 'like learning to drive with.' • In, it was more like a discussion held over a chess table between Charles and Erik, without much actual play. The lack of play and banter almost seems to symbolize the extreme distance and hostility (perhaps the worst in the series) between them, including Erik's violent outburst just minutes earlier.
• Kronsteen in is an actual chess grandmaster as well as being SPECTRE's chief strategist. His introduction shows him defending his title as champion of Russia when SPECTRE (SMERSH in the book) calls him into the meeting; he delays long enough for his opponent to run out of time before heading off. • In, there is a scene where Slevin and the Boss discuss how Slevin will kill the Rabbi's son, interposed with a scene where Goodkat tells the Boss how he can manipulate Slevin into performing the murder, all while playing chess. • The Oliver Parker film adaptation of has Iago (played by ) illustrating his plan with an actual chessboard. Eldon Tyrell and J.F. Sebastian (one of Tyrell's genetic designers) regularly play chess as an indication of their intellects.
The replicant Roy Batty tricks his way into Tyrell's presence by feeding Sebastian chess moves that beat Tyrell — indicating Batty's intellectual superiority. • Early in, Chekov and Terrell stumble on what's left of the Botany Bay on Ceti Alpha V, which is being used as shelter for. One of the items they see is a chess set, which isn't surprising for someone as as Khan. It's worth noting, however, that it's a regular 2-D chessboard and not the 3-D setup that Kirk and Spock are used to.
It serves as a hint that despite Khan's intelligence, he's sorely lacking in experience and three-dimensional thinking compared to Kirk. • During Spock's memory test in, he is shown playing spherical chess on a computer screen.
Given that it is Spock, the computer stood no chance. • In, Sheriff Langston plays chess with himself, showing that he is intelligent and. Specifically, there is a deputy sitting opposite him at the chess board; Langston makes a move, and then stoically turns the chessboard around so that he is now playing the opposite side's pieces every move. • Famously parodied in. The two title characters die and meet the Grim Reaper, who offers the traditional ' challenge. Being that they are, Bill and Ted proceed to play and beat Death at Battleship, Clue, Twister, and other (less cerebral) games. • Subverted in.
After a particularly devious play in their campaign to create a fake war, the film producer remarks to the spin doctor, 'I'll bet you're great at chess.' The spin doctor replies, 'I would be, if I could remember how all the pieces moved.' • has the characters played by Faye Dunaway and Steve McQueen play a game of sexy chess — until he suggests they 'play a different game.' • Peel and Steed play a game of chess. Peel has been portrayed as a genius up to this point, and she plays from memory and handily defeats Steed to show her intellectual superiority.
• In, Max and his mentor play Go, which factors into several mathematical and visual motifs. • The in is shown teaching his son Go. • In, the villain interrupts a chess game between two random strangers playing by the street, and defeats the other player in four or so moves. • has Julius and David Levinson playing chess together early on, with David winning easily. He spends much of the rest of the movie talking in. • in, as Bart and the Waco Kid build their friendship by playing chess.
While neither man is particularly smart, both count as bright compared to the other characters in the film. • introduces professor Hector Hammond, zoologist and alien examiner, by having him play chess over the internet. • In, the genius John Nash is seen playing with another really smart guy. When John loses, he has an emotional reaction that is easily mistaken for being a sore loser. However, it's actually the beginning of a revelation that will eventually land him a Nobel Prize. • In, Kevin Flynn is a wise who is said to often play. • In, Holmes and Moriarty play a game of chess during the climax, which acts as a metaphor for the events taking place inside the peace summit (with their respective representatives as the pieces).
The pair are so smart that by the end it doesn't even stay on the board; they simply call out the moves in the middle of their ongoing conversation and Holmes successfully checkmates Moriarty both in the game and in his plot. • Inverted in Bad Company, in which Chris Rock's character is adept at chess. He's street smart, but not book smart.
• In, Wynn is prominently shown playing chess with Dodd and beating him at every turn to show off his advanced mental faculties. • In the film version of, Light and L play a brief game against one another, while holding a conversation about L's suspicions that Light is Kira. Light wins, to which L responds with a deadpan 'Impressive.' •: Backgammon variant, in the cave. Using (like nuts and bolts), presumably from. • From, after the two of them watch the love interest head off to join the title character. Professor Hieronymous Grost: 'I play chess.'
Doctor Marcus: 'And I have a bottle of very good wine, tucked away for a rainy day.' Grost: *Glances up at the cloudless sky* 'It's pouring.'
• opens with Detective Joseph Thorne playing speed chess against a friend of his to show his intelligence. On top of that he has a phone conversation halfway through without interrupting his game and goes right back to playing a basketball game after he trumps his opponent. Forbin is playing against Colossus using a stylized chess set while his colleagues try to shut down the with a. Unfortunately by that stage it overcomes the attempt in a few seconds, while simultaneously completing the chess move with obvious.