Month: January 2018

Ep 120: Long short-term memory

Long short-term memory

In episode 117, I expressed some concern. It seemed that neural network implementations lacked a way of holding onto information over time. It turns out that the problem has been addressed by recurrent neural networks. Recurrent networks remember, though not very well. Today, we look at a modification of recurrent networks that allows artificial neural networks to remember much more, for much longer.
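For anyone who wants to see the trick rather than just hear about it, here is a minimal single-unit LSTM cell sketched in Java. It is not taken from the video or articles linked below, and the weights are arbitrary placeholders; it only shows how the forget, input, and output gates let the cell state carry a value across many steps.

    // Minimal single-unit LSTM cell. All weights are arbitrary placeholders,
    // chosen only to make the gating visible; nothing here is trained.
    public class LstmCellSketch {
        // Input weights (w), recurrent weights (u), and biases for each gate.
        double wf = 0.5, uf = 0.5, bf = 1.0;   // forget gate (positive bias favors remembering)
        double wi = 0.5, ui = 0.5, bi = 0.0;   // input gate
        double wo = 0.5, uo = 0.5, bo = 0.0;   // output gate
        double wg = 0.5, ug = 0.5, bg = 0.0;   // candidate value

        double c = 0.0;   // cell state: the "long-term" memory
        double h = 0.0;   // hidden state: the "short-term" output

        static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

        // One time step: f decides how much old memory to keep, i how much
        // new information to write, and o how much of the memory to reveal.
        double step(double x) {
            double f = sigmoid(wf * x + uf * h + bf);
            double i = sigmoid(wi * x + ui * h + bi);
            double o = sigmoid(wo * x + uo * h + bo);
            double g = Math.tanh(wg * x + ug * h + bg);
            c = f * c + i * g;        // old memory fades only as fast as f allows
            h = o * Math.tanh(c);     // output is a gated view of the memory
            return h;
        }

        public static void main(String[] args) {
            LstmCellSketch cell = new LstmCellSketch();
            double[] inputs = {1.0, 0.0, 0.0, 0.0};   // one signal, then silence
            for (double x : inputs) {
                System.out.printf("input %.1f -> output %.3f, cell state %.3f%n",
                                  x, cell.step(x), cell.c);
            }
        }
    }

With the forget gate biased toward staying open, the cell state decays only slowly after the input goes quiet; push that bias far negative and the memory disappears almost immediately.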

Here is one of the best videos I’ve ever seen for explaining how a neural network functions; it also explains how a long short-term memory network works.

Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM)

Here are a couple of articles on long short-term memory neural networks.

Understanding LSTM Networks

Recurrent Neural Networks Tutorial

Ep 119: Robotic dreaming

Robotic dreaming

When you are awake, the world comes in at you through your senses. When you are asleep and dreaming, you create a world from within. An algorithm for deep learning, called “the wake-sleep algorithm,” seems to capture this behavior.

I referenced the previous episode in this one, so you may as well have a link to it.

Ep 118: Sleep and dreams

Here’s a link to a 13-minute, jargon-heavy lecture on the wake-sleep algorithm.

Lecture 13.4 — The wake sleep algorithm

I love the smell of code in the morning

I need to make a couple of notes before I forget.

I went through and put in code to set b to 1 or -1 on the methods that needed it. I was going to have write and paste, both inner and outer, return -1 if they were called while the buffer was empty. However, I changed my mind. Instead, the outer commands return -1 when a figure tries to write or paste to itself with the outer head. The same thing will happen with read and cut. As it happens, that means that inner write and inner paste never set b to -1. I went ahead and had them set b to 1, just for the sake of consistency.

Meanwhile, in move inner, b is set to -1 if the method tries to move the inner head further than it can go, and to 1 otherwise. That gives a figure a way to tell when it has reached the top or bottom of the figure. Read commands also set b to -1 if they are sent while the head is at the end of the figure, one spot past the last number to be read.

I need to make sure that any outer commands, like read, cut, write, or paste, set the otherHead field of the figure being read from or written to so that it holds the address of the figure doing the reading or writing.
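Here’s a rough sketch of those rules, assuming a Java-like Figure class. The names are placeholders rather than the project’s real ones (the notes here use both otherHead and otherAddress for that field), and “return -1” is modeled as setting the b flag.

    // Rough sketch only: placeholder names, simplified behavior.
    import java.util.ArrayList;
    import java.util.List;

    public class Figure {
        int address;                    // this figure's slot in the realm
        int otherAddress = -1;          // stamped by whichever figure touches us from outside
        int b = 1;                      // feedback flag: 1 = worked, -1 = problem
        int innerHead = 0;              // position of the inner head
        List<Integer> numbers = new ArrayList<>();   // the figure's contents

        Figure(int address) { this.address = address; }

        // Inner write can't fail, so it always leaves b at 1 for consistency.
        void innerWrite(int value) {
            numbers.add(innerHead, value);
            b = 1;
        }

        // Move the inner head; b = -1 means it hit the top or bottom of the figure.
        void moveInner(int delta) {
            int target = innerHead + delta;
            if (target < 0 || target > numbers.size()) {
                b = -1;
            } else {
                innerHead = target;
                b = 1;
            }
        }

        // Inner read; b = -1 when the head sits one spot past the last number.
        int innerRead() {
            if (innerHead >= numbers.size()) {
                b = -1;
                return 0;               // nothing there to read
            }
            b = 1;
            return numbers.get(innerHead);
        }

        // Outer write: refuses to target itself, and stamps the target with
        // the writer's address so the target knows who touched it.
        void outerWrite(Figure target, int value) {
            if (target == this) {
                b = -1;                 // writing to yourself with the outer head fails
                return;
            }
            target.otherAddress = this.address;
            target.numbers.add(value);  // simplified: just append to the target
            b = 1;
        }
    }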

I’ve still got several outer head commands to implement, and I’m way behind on comments. I actually think I’ll go ahead and finish implementing the remaining ports before I catch up on the docs. It shouldn’t, judging by the methods I’ve already created, take too long. With a combination of cut and paste, and find and replace, I can adapt the methods I’ve already implemented to do what they do, only to another figure, instead of the one that is calling the outer head ports.

I have no clue how long catching up on the documentation will take, but I don’t want to go further without getting that done. There are just too many little details that could get lost if I don’t take care of it soon.

Meanwhile, last night, I tested setting one of the slots in the realm’s population array to null. It worked. I ran it a couple of times, but stopped when I realized that I was making orphans. The one figure would write out a child copy of itself with a slight mutation, and then the parent figure would get deleted. The system is very far from creating anything that would count as living, but it still made me feel just enough guilt to make me stop screwing around and move on.

Okay, that’s where I am, and where I’m going.

I got the addresses handled in what’s there so far. I made sure the outerWrite command set the target figure’s otherAddress field to be the address of the source figure.

While I was at it, I changed some of the test code messages, so the up and down directions are reversed from before. I just picture 0 at the top of the array, so adding is going down to me.

When a method needs to see if the outer head is pointing at the source figure, it should test the addresses. It was using the name field, but since I might switch that to a string or some other object, I changed the check to use addresses too. That’s why I did all this in the morning; I had too much that I might forget to do if I didn’t get it done.

if (x == address)  // compare addresses, not the name field

Got an episode to get done.

Mutant bouncing baby bits

It’s Saturday night, and I’ve got some coding to do. I’d really like to get a figure to self-reproduce tonight, but I don’t know if I can pull it off or not. Last time, in the middle of creating the realm, I realized that there is still some work that needs doing on the ports and methods I’ve already created. The figures could use some feedback on how a given operation has worked.

It’s really easy to set up. I can set b to zero or less for one result, one or higher for the other. The exact value doesn’t matter, since this happens from a set command. The value I give to b will only be used to choose one or the other branch, and won’t be saved. I’ve already got the test code in place, and moved b up to class scope in the Figure class. I’m thinking of moving the other internal values for what is, at the moment, the run method up to class scope as well. Right now, it’s all about the set method, but I have no idea what future implementations or extensions might need or want to do. I think it’s best to go for maximum flexibility, especially for the node system, which I’ve yet to talk about, let alone implement.
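As a purely made-up illustration of that feedback loop, the branch driven by the set command might look something like this; none of these names come from the real code.

    // Made-up illustration: the sign of b picks a branch, and b itself is
    // consumed on the spot rather than being stored anywhere.
    public class SetCommandSketch {
        static String set(int b) {
            // Zero or less takes one branch, one or higher takes the other.
            return (b <= 0) ? "operation failed, try something else"
                            : "operation worked, keep going";
        }

        public static void main(String[] args) {
            System.out.println(set(-1));   // e.g. an outer write to itself
            System.out.println(set(1));    // e.g. an inner write that succeeded
        }
    }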

Ports are used for special commands; Nodes are used to add new ports and abilities. The goal is to create a tunable emergent system. I don’t just want digital life; I want digital life that can solve problems and do tasks for me. It’s artificial life that is also artificial intelligence.


Ep 118: Sleep and dreams

Sleep and dreams

There are two types of sleep: rapid eye movement, or REM, sleep, and non-rapid eye movement, or non-REM, sleep. Dreams happen during both types of sleep, and there is a well-established link between the amount and quality of sleep you get and how well you recall and/or learn. Today, we take a little peek at what happens in the brain while you sleep and dream.

Here’s a link to a panel discussion on sleep and dreams. The part I talk about in this episode starts at roughly 22 minutes and 22 seconds in.

The Mind After Midnight: Where Do You Go When You Go to Sleep?

Here are a couple of articles about the studies done with rats and their dreams.

Rats May Dream, It Seems, Of Their Days at the Mazes

Rats dream about their tasks during slow wave sleep

Here’s a link to an article about memory, and the types of dreams that occur during REM and non-REM sleep.

Memory, Sleep and Dreaming: Experiencing Consolidation

Ep 117: Sleep, reset and brain wash

Sleep, reset and brain wash

While you are sleeping, your brain performs a reset of sorts. Synaptic weights that increased over the course of the day decrease while you are sleeping. At the same time, the fluid your brain floats in rushes through your brain tissue, clearing out wastes that couldn’t be removed during the day.

Here’s a video and an article about how wastes are cleared away during your sleep.

One more reason to get a good night’s sleep

How Sleep Clears the Brain

Here’s an article on the link between synapse size and synaptic weight—the strength of the signal that comes from a given synapse.

The Secret to the Brain’s Memory Capacity May Be Synapse Size

And here’s an article about how synapses shrink in size during sleep.

How Sleep Resets the Brain

Ep 116: Bit seat drivers

Bit seat drivers

Deep learning algorithms, and neural networks in general, require much more training than humans do. They are unable to generalize well enough to handle situations not covered in the training data, and can be thrown off by things that a human wouldn’t even notice. Today we look at these challenges by examining what it takes to train a neural network to drive a car.

Here are a couple of links about training self-driving vehicles.

Edge case training and discovery are keeping self-driving cars from gaining full autonomy

Training AI for Self-Driving Vehicles: the Challenge of Scale

Here’s a short video demo and an article about how AI image recognition can be fooled by things that wouldn’t fool many animals.

Adversarial Patch

Google ‘optical illusion’ stickers make AI hallucinate

Ep 115: Do we need something else, or just more?

Do we need something else, or just more?

Though deep learning has had some promising results, there are still some things that it simply doesn’t do well. There are other algorithms that do as well or better at certain tasks. On the other hand, we’ve only been able to implement comparatively small neural networks. Perhaps, if we could simulate larger networks, deep learning or an algorithm like it could do what it currently cannot.

Here’s a link to a paper by Gary Marcus, providing a critical review of deep learning and suggesting that it may have to be combined with other approaches to create a general intelligence.

Deep Learning: A Critical Appraisal