Professor: Jeff Orchard | Lecture Videos

Hodgkin-Huxley Neuron Model

Goal: To see the basics of how a neuron works, in the form of the Hodgkin-Huxley neuron model.

A neuron is a special cell that can send and receive signals from other neurons. A neuron can be quite long, sending its signal over a long distance; an axon can be up to a metre long. Most are much shorter.

The middle of a neuron is the soma (cell body). The long stem is the axon, and the signal travels along the axon. Dendrites collect electrical signals from other neurons and carry them toward the soma.

Ions are molecules or atoms in which the number of electrons does not match the number of protons, resulting in a net charge. Several ions float around in cells. The cell’s membrane, a lipid bi-layer, stops most ions from crossing. Ion channels embedded in the cell membrane can allow ions to pass.

The sodium-potassium pump exchanges ions inside the cell for ions outside the cell, pumping Na⁺ out and K⁺ in. This causes a higher concentration of Na⁺ outside the cell, and a higher concentration of K⁺ inside the cell. It also creates a net positive charge outside, and a net negative charge inside.

This difference in charge across the membrane induces a voltage difference called the membrane potential.

Neurons have a peculiar behaviour, they produce a spike of electrical activity called an action potential. The electrical burst travels along the neuron’s axon to its synapses, where it passes signals to other neurons.

Hodgkin-Huxley Model

A model of an action potential spike, based on a nonlinear interaction between the membrane potential and the opening and closing of Na⁺ and K⁺ ion channels.

Let $V$ be the membrane potential. A neuron usually keeps a resting membrane potential of around $-70$ mV.

The fraction of Na⁺ channels that are open is $m^3 h$, where
$$\frac{dm}{dt} = \alpha_m(V)(1-m) - \beta_m(V)\,m, \qquad \frac{dh}{dt} = \alpha_h(V)(1-h) - \beta_h(V)\,h.$$

The dynamics of opening and closing depend on the voltage: the rate functions $\alpha(V)$ and $\beta(V)$ are empirically fit functions of the membrane potential.

The fraction of K⁺ channels open is $n^4$, where
$$\frac{dn}{dt} = \alpha_n(V)(1-n) - \beta_n(V)\,n.$$

The two channel types allow ions to flow into and out of the cell, inducing currents, which affect the membrane potential, $V$:
$$C \frac{dV}{dt} = J_{\text{in}} - g_{\text{Na}}\, m^3 h\, (V - V_{\text{Na}}) - g_{\text{K}}\, n^4 (V - V_{\text{K}}) - g_{\text{L}} (V - V_{\text{L}}),$$
where $J_{\text{in}}$ is the input current, and the $g$'s and reversal potentials are constants of the model.

This system of four differential equations (for $V$, $m$, $h$, and $n$) governs the dynamics of the membrane potential.
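The four equations can be integrated numerically, e.g. with forward Euler. A minimal sketch in Python; the rate functions and constants below are the classic textbook squid-axon values, not taken from these notes:

```python
import math

# Classic Hodgkin-Huxley constants (mV, ms, mS/cm^2, uF/cm^2)
C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

def rates(V):
    """Voltage-dependent opening (alpha) and closing (beta) rates for m, h, n."""
    a_m = 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * math.exp(-(V + 65.0) / 18.0)
    a_h = 0.07 * math.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
    a_n = 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * math.exp(-(V + 65.0) / 80.0)
    return a_m, b_m, a_h, b_h, a_n, b_n

def simulate(J_in=10.0, T=50.0, dt=0.01):
    """Forward-Euler integration of the four HH equations; returns the V trace."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32   # near the resting state
    trace = []
    for _ in range(int(T / dt)):
        a_m, b_m, a_h, b_h, a_n, b_n = rates(V)
        # Ionic currents through the Na+, K+, and leak channels
        I_ion = (g_Na * m**3 * h * (V - E_Na)
                 + g_K * n**4 * (V - E_K)
                 + g_L * (V - E_L))
        V += dt * (J_in - I_ion) / C
        m += dt * (a_m * (1 - m) - b_m * m)
        h += dt * (a_h * (1 - h) - b_h * h)
        n += dt * (a_n * (1 - n) - b_n * n)
        trace.append(V)
    return trace

trace = simulate()
peak = max(trace)   # with this input, the trace contains action-potential spikes
```

With a constant super-threshold input the voltage spikes repeatedly, which is exactly the repetitive firing the model was built to reproduce.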

Simpler Neuron Models

Goal: To look at other, less complicated neuron models.

The HH model is already simplified: a neuron is treated as a point in space, and conductances are approximated with fitted formulas. It only considers Na⁺, K⁺, and generic leak currents.

But modelling a single action potential takes many small time steps of this system. Spikes are fairly generic, and the presence of a spike is more important than its specific shape.

Leaky Integrate-and-Fire (LIF) Model

The LIF model only considers the sub-threshold membrane potential, and does not model the spike itself; it simply records when a spike occurs.

So the sub-threshold voltage can be modelled as
$$C \frac{dV}{dt} = J_{\text{in}} - \frac{V - V_{\text{rest}}}{R}.$$

We can change variables:
$$v = \frac{V - V_{\text{rest}}}{V_{\text{th}} - V_{\text{rest}}}.$$

Then $v = 0$ if $V = V_{\text{rest}}$. And $v = 1$ is the threshold.

We end up with
$$\tau_m \frac{dv}{dt} = J_{\text{in}} - v,$$
where $\tau_m = RC$ is the membrane time constant and the constants have been absorbed into the rescaled input current.

We integrate the differential equation for a given input current until $v$ reaches the threshold value of 1. Then we record a spike at that time and reset $v$ to 0.

After it spikes, it remains dormant during its refractory period. Then it can start integrating again.
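The integrate-fire-rest cycle can be sketched in a few lines of Python. The time constants below ($\tau_m = 20$ ms, 2 ms refractory period) are illustrative choices, not values from these notes:

```python
def lif_spike_times(J_in, T=1.0, dt=1e-4, tau_m=0.02, t_ref=0.002):
    """Integrate tau_m dv/dt = J_in - v, spiking when v >= 1.
    Times are in seconds; returns the list of spike times."""
    v = 0.0
    refr = 0.0            # time left in the refractory period
    spikes = []
    t = 0.0
    while t < T:
        if refr > 0.0:
            refr -= dt    # dormant: no integration during the refractory period
        else:
            v += dt * (J_in - v) / tau_m    # forward-Euler sub-threshold step
            if v >= 1.0:
                spikes.append(t)            # record the spike...
                v = 0.0                     # ...reset, and go dormant
                refr = t_ref
        t += dt
    return spikes
```

Note that an input below the threshold ($J_{\text{in}} \le 1$ in these units) lets $v$ approach $J_{\text{in}}$ asymptotically without ever spiking.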

LIF Firing Rate

If we hold the input constant, we can solve the DE analytically between spikes.

Claim: for a constant input $J_{\text{in}}$ and $v(0) = 0$, the solution between spikes is
$$v(t) = J_{\text{in}} \left( 1 - e^{-t/\tau_m} \right).$$

Proof: Plug in solution to the differential equation and show that LHS = RHS.

This solution approaches $J_{\text{in}}$ asymptotically. The interspike interval is the time between two spikes. It is split up into two parts: the refractory period, and the time it takes $v$ to go from 0 to 1.

It can be shown that the steady-state firing rate for a constant input $J_{\text{in}}$ is
$$G(J_{\text{in}}) = \begin{cases} \left[ t_{\text{ref}} + \tau_m \ln \dfrac{J_{\text{in}}}{J_{\text{in}} - 1} \right]^{-1} & \text{if } J_{\text{in}} > 1 \\ 0 & \text{otherwise.} \end{cases}$$

A tuning curve tells us the spikes per second, $G(J_{\text{in}})$, for each input current $J_{\text{in}}$.
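The firing-rate formula translates directly into code; a sketch, again with illustrative time constants:

```python
import math

def lif_rate(J_in, tau_m=0.02, t_ref=0.002):
    """Steady-state LIF firing rate (spikes/s) for a constant input J_in,
    with threshold 1: solve J_in*(1 - exp(-t/tau_m)) = 1 for the
    threshold-crossing time and add the refractory period."""
    if J_in <= 1.0:
        return 0.0   # v saturates below threshold; the neuron never fires
    return 1.0 / (t_ref + tau_m * math.log(J_in / (J_in - 1.0)))

# A tuning curve is just this function sampled over a range of inputs
curve = [lif_rate(j / 10.0) for j in range(0, 31)]
```

The curve is zero up to $J_{\text{in}} = 1$, then rises steeply and saturates toward $1/t_{\text{ref}}$ for large inputs.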

Sigmoid Neuron

The activity of a neuron is low when the input is low, and activity goes up and approaches the maximum as the input increases.

This general behaviour can be represented by a number of different activation functions.

Logistic Curve: $\sigma(z) = \dfrac{1}{1 + e^{-z}}$

Arctan: $f(z) = \arctan(z)$

Hyperbolic Tangent: $f(z) = \tanh(z) = \dfrac{e^{z} - e^{-z}}{e^{z} + e^{-z}}$

Threshold: $f(z) = \begin{cases} 1 & z > 0 \\ 0 & z \le 0 \end{cases}$

Rectified Linear Unit (ReLU): $f(z) = \max(0, z)$

Multi-Neuron Activation Functions: Some activation functions depend on multiple neurons.

SoftMax

$y$ is a probability distribution, so its elements add to 1. If $z$ is the input to a set of neurons, then
$$y_i = \frac{e^{z_i}}{\sum_j e^{z_j}}.$$

So by definition, $\sum_i y_i = 1$.

One-Hot

One-Hot is the extreme of SoftMax, where only the largest element remains nonzero, while the others are set to zero.
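Both multi-neuron functions are easy to implement; a sketch (the max-subtraction in softmax is a standard numerical-stability trick, not something from these notes):

```python
import math

def softmax(z):
    """Exponentiate and normalize so the outputs sum to 1.
    Subtracting max(z) avoids overflow without changing the result."""
    m = max(z)
    e = [math.exp(zi - m) for zi in z]
    s = sum(e)
    return [ei / s for ei in e]

def one_hot(z):
    """Extreme of softmax: 1 at the largest element, 0 everywhere else."""
    i = z.index(max(z))
    return [1.0 if j == i else 0.0 for j in range(len(z))]
```

Scaling the softmax input (replacing $z$ with $cz$ for large $c$) pushes its output toward the one-hot vector, which is the sense in which one-hot is the "extreme" of softmax.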

Synapses

Goal: To get an overview of how neurons pass information between them, and how we can model those communication channels.

So far, we’ve just looked at individual neurons, and how they react to their input. But that input usually comes from other neurons. When a neuron fires an action potential, this travels along its axon.

The junction where one neuron communicates with the next neuron is called a synapse.

A pre-synaptic action potential causes the release of neurotransmitter, which binds to receptors on the post-synaptic neuron, opening ion channels and changing the membrane potential.

Even though an action potential is very fast, the synaptic process by which it affects the next neuron takes time. Some synapses are fast, some are slow. If we represent that time constant using $\tau_s$, then the current entering the post-synaptic neuron can be written as
$$h(t) = \begin{cases} k\, e^{-t/\tau_s} & t \ge 0 \\ 0 & t < 0, \end{cases}$$

where $k$ is chosen so that $\int_0^{\infty} h(t)\, dt = 1$.

This function is called the Post-Synaptic Current (PSC) filter, or Post-Synaptic Potential (PSP) filter.

Multiple spikes form what we call a “spike train”, and can be modelled as a sum of Dirac delta functions:
$$a(t) = \sum_p \delta(t - t_p),$$
where the $t_p$ are the spike times.

Example

Dirac Delta function: $\delta(t)$ is zero everywhere except at $t = 0$, yet integrates to 1; for any function $f$, $\int f(t)\, \delta(t - T)\, dt = f(T)$.

Question

How does a spike train influence the post-synaptic neuron?

You add together the PSC filters, one for each spike. This is the same as convolving the spike train with the PSC filter:
$$(h * a)(t) = \int h(t - t')\, a(t')\, dt' = \sum_p h(t - t_p).$$
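Because the spike train is a sum of deltas, the convolution collapses to dropping one copy of the filter at each spike time. A sketch, using the normalized exponential PSC filter and a hypothetical spike train:

```python
import math

def psc_filter(t, tau_s=0.01):
    """Exponential PSC filter h(t) = (1/tau_s) e^{-t/tau_s} for t >= 0,
    with k = 1/tau_s so that it integrates to 1."""
    return math.exp(-t / tau_s) / tau_s if t >= 0 else 0.0

def post_synaptic_current(t, spike_times, tau_s=0.01):
    """Convolve the spike train with h: one shifted copy of h per spike."""
    return sum(psc_filter(t - tp, tau_s) for tp in spike_times)

spike_times = [0.010, 0.012, 0.030]   # hypothetical spike train (seconds)
current = post_synaptic_current(0.020, spike_times)
```

Before the first spike the current is exactly zero (the filter is causal); after a burst of spikes, the individual filtered pulses overlap and add.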

Connection Weight

The total current induced by an action potential onto a particular post-synaptic neuron can vary widely, depending on the number and sizes of the synapses, the amount and type of neurotransmitter, the number and types of receptors, etc.

We can combine all those factors into a single number, the connection weight. Thus, the total input to a neuron is a weighted sum of filtered spike trains.

When we have many pre-synaptic neurons, it is more convenient to use matrix-vector notation to represent the weights and activities.

Suppose we have 2 populations, $X$ and $Y$. $X$ has $N$ nodes, and $Y$ has $M$ nodes.

If every node in $X$ sends its output to every node in $Y$, then we will have a total of $NM$ connections, each with its own weight $w_{ij}$ (from node $j$ in $X$ to node $i$ in $Y$).
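In matrix-vector notation, collecting the weights into an $M \times N$ matrix $W$ turns the $M$ weighted sums into the single product $y = Wx$. A sketch for the 2-node to 3-node example below, with hypothetical weight values:

```python
# Hypothetical weights for a fully connected 2 -> 3 layer:
# W[i][j] is the weight from pre-synaptic node x_j to post-synaptic node y_i.
W = [[ 0.5, -1.0],
     [ 2.0,  0.3],
     [-0.7,  1.2]]

def weighted_sum(W, x):
    """Each post-synaptic input is a weighted sum of pre-synaptic activities:
    the matrix-vector product y = W x, written out explicitly."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

x = [1.0, 2.0]           # activities of the N = 2 pre-synaptic nodes
y = weighted_sum(W, x)   # M = 3 post-synaptic inputs, N*M = 6 weights used
```

In practice one would use a numerical library for the product; the point here is only that the double loop over connections *is* the matrix-vector product.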

```mermaid
flowchart LR

subgraph Y
direction LR
y1
y2
y3
end
x1 -- w11 --> y1
x1 -- w21 --> y2
x1 -- w31 --> y3
x2 -- w12 --> y1
x2 -- w22 --> y2
x2 -- w32 --> y3
subgraph X
direction LR
x1
x2
end
```