
Trembling on the verge

I’m about to archive the current state of this project as figures0.1. Version 0.0 went just far enough that a program could reproduce itself and add a slight mutation. This version has every last port implemented. That is, the little programs, or figures, can do everything I want them to be able to do.

They can read from and write to themselves and one another. They can write random numbers that are safe: the random values produced are always valid addresses or ports.
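
The text above doesn’t say how the safe values are chosen, so here is only a minimal sketch of the idea, in Java since that’s what the project is written in. The names, the realm size of 3 (taken from the runs below), and the 50/50 split between addresses and ports are all my assumptions.

    import java.util.Random;

    // A sketch only: every value handed back is either a valid realm address
    // or a valid port code. Names and ranges are assumptions, not project code.
    class SafeRandom {
        static final int REALM_SIZE = 3;    // addresses run 0..2 in the runs below
        static final int LOWEST_PORT = -50; // the jump port is the lowest seen

        private final Random rng = new Random();

        int next() {
            if (rng.nextBoolean()) {
                return rng.nextInt(REALM_SIZE);      // a valid address: 0..REALM_SIZE-1
            }
            return -(1 + rng.nextInt(-LOWEST_PORT)); // a valid port: -1..-50
        }
    }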

They can also cut and paste to and from themselves and each other. Pasting is like writing, except that the values are inserted at the destination instead of overwriting what’s already there. Cutting is like reading, except that the elements are removed from the source as they are read.
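
In list terms, the four operations differ only in whether they shift elements. A minimal sketch, standing a Java List in for a figure’s memory; the class and method names are mine, not the project’s:

    import java.util.List;

    // Illustrative only; names are invented. Write and read leave the memory
    // the same size, while paste grows it and cut shrinks it.
    class MemoryOps {
        // Writing overwrites the element under the head.
        static void write(List<Integer> memory, int head, int value) {
            memory.set(head, value);
        }

        // Pasting inserts, shifting everything at and after the head along.
        static void paste(List<Integer> memory, int head, int value) {
            memory.add(head, value);
        }

        // Reading copies the element out, leaving the memory unchanged.
        static int read(List<Integer> memory, int head) {
            return memory.get(head);
        }

        // Cutting removes the element as it is read.
        static int cut(List<Integer> memory, int head) {
            return memory.remove(head);
        }
    }

Cut every element out of a figure and it’s left with memory={}, which is exactly what happens to the first figure in the run below.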

The very last thing I had to test was the jump command, port -50. If a program reads, writes, cuts, pastes, or even moves inside another figure, that figure gets a port connecting back to the figure that did the reading, writing, or what have you. This gives figures a way to react to whatever is being done to them. The only way to escape the retribution is to change where you’re living: jump.
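
As a rough model of that retaliation mechanism, with every name invented; the real Figure class surely looks different:

    // A model of the behavior described above, not the project's actual code.
    class FigureSketch {
        int address;              // where this figure lives in the realm
        Integer retaliationPort;  // points back at whoever last touched us

        // Called when another figure reads, writes, cuts, pastes, or moves in here.
        void touchedBy(FigureSketch actor) {
            retaliationPort = actor.address; // now we know where to strike back
        }

        // Port -50: relocate to an empty address, so any port still aimed at
        // the old address no longer finds us.
        void jump(int emptyAddress) {
            address = emptyAddress;
        }
    }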

Here are a couple of figures. The first one just moves its read/write head inside the second. The second one then cuts out the entire memory of the first, effectively killing it.

First figure: {P.N_THREE, P.MOVE_OUTER, 3, 0, P.FIND_EMPTY, 6, /*0, P.JUMP,*/ -1, -1, -1, -1, 1, 2, 3, 4, 5}
Second figure: {P.OTHER_SIZE, P.OTHER_BOARD, -1, -1, -1, -1}
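
For reference, lining these listings up against the memory dumps below gives the numeric port values. This interface is my reconstruction, and the comments are guesses at each port’s meaning; only the jump port, -50, is stated outright above.

    // Reconstructed from the dumps; meanings guessed from names and behavior.
    interface P {
        int N_THREE     = -3;  // the literal value negative three
        int MOVE_OUTER  = -32; // move the read/write head into another figure
        int FIND_EMPTY  = -17; // find an empty address in the realm
        int JUMP        = -50; // relocate to the address found
        int OTHER_SIZE  = -38; // the size of the figure being acted on
        int OTHER_BOARD = -45; // the other figure's memory, here the cut target
    }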

Here’s what happens if we run them.

The size of the realm=3
Before:
slq buffer size=0
slq2 buffer size=0
Figure 0 at 0 memory={-3, -32, 3, 0, -17, 6, -1, -1, -1, -1, 1, 2, 3, 4, 5}
null
Figure 1 at 2 memory={-38, -45, -1, -1, -1, -1}
After:
slq buffer size=0
slq2 buffer size=15
Figure 0 at 0 memory={}
null
Figure 1 at 2 memory={-38, -45, -1, -1, -1, -1}

In that run, the jump command is commented out. If we uncomment it…

{P.N_THREE, P.MOVE_OUTER, 3, 0, P.FIND_EMPTY, 6, 0, P.JUMP, -1, -1, -1, -1, 1, 2, 3, 4, 5}

… the first figure is able to save itself.

The size of the realm=3
Before:
slq buffer size=0
slq2 buffer size=0
Figure 0 at 0 memory={-3, -32, 3, 0, -17, 6, 0, -50, -1, -1, -1, -1, 1, 2, 3, 4, 5}
null
Figure 1 at 2 memory={-38, -45, -1, -1, -1, -1}
After:
slq buffer size=0
slq2 buffer size=0
null
Figure 0 at 1 memory={-3, -32, 3, 0, -17, 6, 0, -50, -1, -1, -1, -1, 1, 2, 3, 4, 5}
Figure 1 at 2 memory={-38, -45, -1, -1, -1, -1}

Okay then, figures0.1 is done. I’m glad, as the next bits of development will finally start to get into the fun stuff.

Ep 126: How to have a lucid dream

Ask yourself, right now, “Am I dreaming?” Do you know how you got where you are? Does what’s around you and what you’re doing make sense? Do letters or numbers look right, or do the figures wriggle around and change while you’re watching? Does gravity work like it should, or can you float, or even fly?

Ep 125: Dreams and memory

Do you remember your dreams? Everyone dreams, but not everyone regularly recalls them. If you wish to explore your dreamscape, you’ll need to remember what dreams may come. You could have the most inspirational, vivid, detailed dream imaginable, but if you can’t remember it, it doesn’t much matter. Today, we look at ways you can improve your dream recall.

Ep 124: Ride the nightmare

If you have nightmares and you wish to experiment with lucid dreaming, you’re rather lucky. It’s fairly easy to recognize nightmares while they’re happening, so lucidity often arrives spontaneously during them, even for people who have never heard of lucid dreaming. It happened to me.

Ep 123: A tale of two books, part 2

One of the things I’ve always had available for experiment is my own mind. Whether it’s mnemonics to improve my memory, or strange mental exercises to induce a lucid dream or an out of body experience, I’ve spent decades plumbing the depths of my internal ocean. Today, I tell the story of the second book I bumped into as a child, and the strange quest it began, even though it wasn’t an especially good book.

And the baby has a baby

When I started posting about this project, I just dove in. Eventually I’ll have to lay out exactly what I’m working on, and how I’m approaching the development of my artificial life forms. That will take some time. For now, have yet another post wherein I entirely fail to explain what’s going on.

Over the weekend, I finished coding the Figure class. It took only 1,500 to 1,700 lines of code, depending upon how you count and what you count. I still have much testing, debugging, and documentation to get done before I can finally move on to the interesting parts.

A few days ago, while testing one of many little pieces of the project, I saw the first little program produced by the system, rather than hand-written by me, reproduce itself. I’m going to paste in the text from my journal. Note that each program copies itself and adds a small mutation to the end of the child program. The longer the program, the younger it is.

Friday January 19, 2018

2:18 AM

First time I ran a figure I didn’t write.

The size of the realm=3
looking for neighbors.
Going up.
nobody new around.
Reading from out there.
Running baby.
looking for neighbors.
Going up.
party at 0!
Reading from out there.
Figure 0 memory={-4, -17, 3, -9, -13, 6, -8, -19, 9, -3, -19, 12, -6, -18, -1, -1, -1, -1}
Figure 2 memory={-4, -17, 3, -9, -13, 6, -8, -19, 9, -3, -19, 12, -6, -18, -1, -1, -1, -1, 3, 17, -14, 18, 15, 22}
Figure 1 memory={-4, -17, 3, -9, -13, 6, -8, -19, 9, -3, -19, 12, -6, -18, -1, -1, -1, -1, 3, 17, -14}

See that? Baby had a baby!
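
Reading the dumps: Figure 1 is a copy of Figure 0 with 3, 17, -14 appended, and Figure 2 is a copy of Figure 1 with 18, 15, 22 appended. A minimal sketch of that rule; the mutation length of three matches the dumps, while the value range and all names are my assumptions:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    // Copy the parent, then append a small random mutation. Sketch only.
    class Reproduce {
        static final Random RNG = new Random();

        static List<Integer> spawn(List<Integer> parent) {
            List<Integer> child = new ArrayList<>(parent);  // exact copy of the parent
            for (int i = 0; i < 3; i++) {
                child.add(RNG.nextInt(45) - 22);            // mutated tail (range assumed)
            }
            return child;  // always longer than the parent: the longer, the younger
        }
    }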

Ep 122: A tale of two books, part 1

After over 120 episodes, I thought it might be about time I introduce myself. So, today we have the story of how I fell in love with science.

In this episode, I quote Walt Whitman’s poem “O Me! O Life!” with the line: “That the powerful play goes on, and you may contribute a verse.” Here’s a link to the entire poem.

O Me! O Life!

Ep 121: Neural Turing machines

Traditional programming methods are very good at solving problems that have simple rules to apply. They’re not so good when there are no simple rules that can be used, or when the rules are unknown. Neural networks are very good at problems with complex or poorly defined rules, but not so good at simple if-then rules. With traditional computing on the one hand and neural networks on the other, each good at what the other is bad at, perhaps the two should somehow be combined.

Here’s a video on Neural Turing machines.

Neural Turing Machines: Perils and Promise

Here are the episodes that were referred to in this episode.

Ep 108: Socrates is not a woman

Ep 110: Better and better

Here are a couple of articles on Neural Turing machines.

Neural Turing Machines | the morning paper

Neural Turing Machine

Ep 120: Long short-term memory

In episode 117, I expressed some concern. It seemed that neural network implementations lacked a way of holding onto information over time. It turns out that the problem has been addressed by recurrent neural networks. Recurrent networks remember, though not very well. Today, we look at a modification of recurrent networks that allows artificial neural networks to remember much more, for much longer.
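
For the curious, the trick is a cell state with gates deciding what to keep, what to add, and what to show. Here is the standard textbook LSTM cell reduced to scalar arithmetic, written in Java only to match the rest of this blog; real implementations use vectors and trained weights.

    // One LSTM cell step, scalar version. Weights default to zero here;
    // training would set them. Illustrative, not a usable implementation.
    class LstmCell {
        double wf, uf, bf;   // forget-gate weights
        double wi, ui, bi;   // input-gate weights
        double wo, uo, bo;   // output-gate weights
        double wg, ug, bg;   // candidate-value weights

        double c = 0.0;      // cell state: the long-term memory
        double h = 0.0;      // hidden state: the short-term output

        static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

        double step(double x) {
            double f = sigmoid(wf * x + uf * h + bf);   // how much old memory to keep
            double i = sigmoid(wi * x + ui * h + bi);   // how much new input to admit
            double o = sigmoid(wo * x + uo * h + bo);   // how much memory to reveal
            double g = Math.tanh(wg * x + ug * h + bg); // candidate new content
            c = f * c + i * g;    // the cell state can carry information for a long time
            h = o * Math.tanh(c); // the output is a gated view of that memory
            return h;
        }
    }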

Here is one of the best videos I’ve ever seen for explaining how a neural network functions; it also explains how a long short-term memory network works.

Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM)

Here are a couple of articles on long short-term memory neural networks.

Understanding LSTM Networks

Recurrent Neural Networks Tutorial

Ep 119: Robotic dreaming

When you are awake, the world comes in at you through your senses. When you are asleep and dreaming, you create a world from within. An algorithm for deep learning, called “the wake-sleep algorithm,” seems to capture this behavior.

I referenced the previous episode in this one, so you may as well have a link to it.

Ep 118: Sleep and dreams

Here’s a link to a 13-minute, jargon-heavy lecture on the wake-sleep algorithm.

Lecture 13.4 — The wake sleep algorithm