When you're teaching an AI to solve a problem or perform a task, you usually have to provide the right answers. Sometimes, though, an AI can learn by itself, without ever being told what the right answer is.
Computers are being used to design computers. The better our computers and their tools get, the better the computers and tools they can produce. But it wasn't always an easy path. Today we look at the VAX 9000, which used a system called SID to generate part of its design. SID was an expert system, and it outperformed the human engineers, some of whom refused to work with it. The company that created the VAX 9000 didn't fare well, and was eventually acquired by Compaq after divesting its major assets.
Here’s a video Digital Equipment Corporation produced for its sales department in 1991, 7 years before the company failed.
Computers use Boolean logic. Everything is true or false, yes or no, zero or one. But there are plenty of situations a simple yes or no won't cover. To get a computer to handle those situations, one can use fuzzy logic. Today, we have an informal experiment that shows why fuzzy logic is needed for even simple things.
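As a toy illustration of the difference: in Boolean logic, "hot" is either true or false, while fuzzy logic assigns a degree of membership between 0 and 1. Here's a minimal Python sketch; the temperature thresholds are invented for illustration, not from the experiment in the episode:

```python
def is_hot_boolean(temp_c):
    """Boolean logic: 'hot' is all-or-nothing at an arbitrary cutoff."""
    return temp_c >= 30.0

def is_hot_fuzzy(temp_c):
    """Fuzzy logic: degree of 'hotness' ramps linearly from 0.0 at 20 C
    to 1.0 at 35 C. (Thresholds invented for illustration.)"""
    if temp_c <= 20.0:
        return 0.0
    if temp_c >= 35.0:
        return 1.0
    return (temp_c - 20.0) / (35.0 - 20.0)

for t in (15, 25, 29, 31, 40):
    print(t, is_hot_boolean(t), round(is_hot_fuzzy(t), 2))
```

To the Boolean version, 29 C and 31 C are opposites; to the fuzzy one, they're nearly the same.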
Evolutionary methods, genetic algorithms, and neural networks aren't the only approaches to creating artificial intelligence. Today, we look at one of the early, and rather successful, ones: expert systems.
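In a nutshell, an expert system is a knowledge base of if-then rules plus an inference engine that chains them together. Here's a minimal forward-chaining sketch in Python; the rules and facts are invented for illustration and aren't from any real system:

```python
# Each rule: if all condition facts are known, conclude a new fact.
# These toy rules are invented for illustration.
RULES = [
    ({"has_fever", "has_cough"}, "flu_likely"),
    ({"flu_likely", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts):
    """Apply rules repeatedly until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fever", "has_cough", "short_of_breath"}))
# Derives both 'flu_likely' and 'see_doctor'.
```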
Ep 107: Take heart, ye robots shivering in the cold
In the past, new methods of creating artificial intelligence have garnered interest and enthusiasm. Then, when the overoptimistic forecasts failed to pan out, nearly all funding and research ground to a halt. It's called an AI winter. Despite such setbacks, the general trend has been toward increasing ability and complexity in AI systems. Spring is coming, and maybe it's already here.
There won't be any posts here after this one for a couple of weeks. I must navigate the dangerous, relative-infested waters of the holidays.
I need to be able to show that the way we program our machine lets it run any program, at least in principle. On the other hand, I'd very much like to switch gears and get to talking about actual gear soon.
Here’s one more post on using subleq. I think that will do for now. …
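For readers who haven't met it: subleq ("subtract and branch if less than or equal to zero") is a computer with just one instruction, and it is Turing-complete, which is exactly the "can run any program, at least in principle" claim above. Here's a minimal interpreter sketch in Python; the memory layout and the sample program are my own illustration, not code from the posts:

```python
def run_subleq(mem):
    """subleq: subtract and branch if less than or equal to zero.
    Each instruction is three cells a, b, c:
        mem[b] -= mem[a]; if mem[b] <= 0, jump to c, else fall through.
    A jump outside memory (e.g. -1) halts the machine."""
    pc = 0
    while 0 <= pc <= len(mem) - 3:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Tiny program: zero out cell 6, then halt.
# Cells 0-2: subleq 6, 6, 3   (mem[6] -= mem[6] -> 0, so jump to 3)
# Cells 3-5: subleq 7, 7, -1  (always jumps to -1, which halts)
prog = [6, 6, 3,  7, 7, -1,  42, 0]
print(run_subleq(prog)[6])  # -> 0
```

Everything else, addition, copying, loops, gets built out of that one instruction, which is why proving it can run any program is plausible in the first place.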
In 1957, Frank Rosenblatt came up with the perceptron, a simple neural network that could recognize simple shapes. Unfortunately, Rosenblatt got a little overexcited and made inflated claims about what his perceptron would be able to do. After the 1969 publication of Marvin Minsky and Seymour Papert's book, “Perceptrons,” which debunked many of Rosenblatt's claims and pointed out some of the inherent limitations of the perceptron algorithm, interest in and funding for neural networks dropped drastically.
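For the curious, the perceptron algorithm itself fits in a few lines: take a weighted sum of the inputs, threshold it, and nudge the weights whenever the output is wrong. A minimal sketch in Python; the toy AND-gate data is my own illustration:

```python
# Rosenblatt-style perceptron: weighted sum, hard threshold, and an
# error-driven weight update. Toy AND-gate data invented for illustration.
def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# AND is linearly separable, so a single perceptron can learn it;
# XOR is not, which is one of the limitations Minsky and Papert highlighted.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
for x, target in data:
    out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
    print(x, "->", out, "(want", target, ")")
```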
Here are a couple of articles about the perceptron and the early history of neural network design.
If you look at only one neuron—one brain cell—how it behaves is actually fairly simple. Today, we cover the relatively simple basics of how neurons work.
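The gist, simplified to the point of caricature: a neuron's membrane potential integrates incoming current, leaks back toward rest, and produces a spike when it crosses a threshold. Here's a sketch of that "leaky integrate-and-fire" picture in Python; all the constants are invented for illustration:

```python
# Leaky integrate-and-fire caricature of a neuron: the membrane potential
# leaks toward rest, input current pushes it up, and crossing the
# threshold produces a spike. Constants are invented for illustration.
def simulate_lif(current, steps=60, dt=1.0, leak=0.1, threshold=1.0):
    v = 0.0            # membrane potential, starting at rest
    spike_times = []
    for t in range(steps):
        v += dt * (current - leak * v)   # integrate input, leak toward rest
        if v >= threshold:
            spike_times.append(t)
            v = 0.0                      # reset after the spike
    return spike_times

print(simulate_lif(0.05))  # weak input: potential plateaus below threshold, no spikes
print(simulate_lif(0.30))  # strong input: regular, rhythmic spiking
```

The interesting behavior comes not from any one cell but from wiring billions of these simple units together.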
In the early 1990s, a biologist named Thomas Ray created a computer program that acted like a computer infected with many little programs. He called it Tierra, Spanish for “Earth.” The little programs could, and did, mutate, self-replicate, and evolve in strange and wonderful ways.
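To give a flavor of that mutate-and-replicate loop, here's a toy sketch in Python. Everything in it is invented for illustration; real Tierra creatures were machine-code programs competing for space and CPU time in a shared memory "soup":

```python
import random

# Toy flavor of Tierra's core loop: programs copy themselves, and each
# copy occasionally mutates. Genome alphabet and rate are invented.
def replicate(genome, mutation_rate=0.05):
    alphabet = "abcd"
    return "".join(
        random.choice(alphabet) if random.random() < mutation_rate else gene
        for gene in genome
    )

soup = ["abcdabcd"]
for generation in range(5):
    soup += [replicate(g) for g in soup]  # every program copies itself
print(len(soup), soup[-1])  # population doubles; late copies drift from the ancestor
```

Add selection pressure, as Tierra did by reaping old and faulty programs, and the drifting copies start to evolve.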