We have recently heard how the neurotech startup Neuralink plans to augment the human brain’s computation by implanting a minuscule interface onto the brain. The electrodes in such brain-machine interfaces convert neuronal activity into commands capable of controlling external systems. The biggest question that arises is: how will the signals in your brain be processed?
To understand this, we need to know how neurons are structured in the brain and how they transmit information. Everyone who has been following recent machine learning trends is aware of second-generation Artificial Neural Networks (ANNs). ANNs are usually fully connected, they deal with continuous values, and they have made tremendous progress in many fields.
However, they do not imitate the mechanism of the brain’s neurons. The next generation of neural network, the spiking neural network (SNN), aims to ease the application of machine learning in neuroscience.
How Neurons Transmit Information in the Brain
How is information sent and received by a neuron? Neurons communicate by transmitting information, both within a single neuron and from one neuron to another. In the human brain, the dendrites usually receive information from the sensory receptors. That information is passed through the cell body to the axon.
As soon as the information reaches the axon, it travels down the axon’s entire length as an electrical signal known as the action potential. On reaching the end of the axon, the information needs to be transmitted to the next neuron’s dendrites, if required. There is a synaptic gap between the axon and the dendrites of the next neuron. The signal crosses this gap either directly, at electrical synapses, or with the help of chemical neurotransmitters.
Spiking Neural Network
A spiking neural network (SNN) is different from the traditional neural networks known in the machine learning community. Spiking neural networks operate on spikes: discrete events that take place at specific points in time. This sets them apart from Artificial Neural Networks, which use continuous values. Differential equations represent the various biological processes involved in the event of a spike.
One of the most critical of these processes is the neuron’s membrane potential. A neuron spikes when its membrane potential reaches a specific threshold, after which the potential is reset. It takes some time for a neuron to return to its stable state after firing an action potential; this interval after a spike is known as the refractory period.
During the refractory period, triggering another action potential is quite difficult even if the excitatory inputs are strong, because the neuron’s sodium ion channels are inactivated and cannot support another action potential. Thus, a neuron does not go on a firing spree even when it receives constant excitatory input.
The Leaky Integrate-and-Fire (LIF) model is the most common spiking neuron model. Also, unlike typical ANNs, spiking neural networks are not densely connected.
How Does a Spiking Neural Network Work?
In an SNN, information is encoded in the timing and pattern of spikes generated by individual neurons. Neurons accumulate incoming signals, and when their membrane potential reaches a certain threshold, they generate a spike that propagates to connected neurons. This behavior is modeled using spiking neuron models, such as the Leaky Integrate and Fire model.
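This integrate-then-fire behavior, including the refractory period described earlier, can be sketched in a few lines of plain Python. The function below is a minimal illustration; the constants (threshold, leak factor, refractory length) are made-up example values, not taken from any particular library or paper:

```python
def simulate_lif(input_current, threshold=1.0, leak=0.9, refractory_steps=3):
    """Simulate a discrete-time leaky integrate-and-fire neuron.

    input_current: sequence of input values, one per time step.
    Returns the list of time steps at which the neuron spiked.
    """
    v = 0.0              # membrane potential
    refractory = 0       # steps remaining in the refractory period
    spike_times = []
    for t, i_in in enumerate(input_current):
        if refractory > 0:          # during the refractory period the
            refractory -= 1         # neuron ignores excitatory input
            continue
        v = leak * v + i_in         # integrate the input, with leak
        if v >= threshold:          # threshold crossed: emit a spike
            spike_times.append(t)
            v = 0.0                 # reset the membrane potential
            refractory = refractory_steps
    return spike_times
```

Feeding it a constant input, e.g. `simulate_lif([0.4] * 20)`, shows the key property: the neuron fires at regular intervals rather than continuously, because each spike is followed by a reset and a refractory pause.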
Applications of Spiking Neural Networks
SNNs have found applications in various domains, including robotics, speech recognition, sensor networks, and neuromorphic engineering. Their ability to process temporal information makes them particularly suitable for real-time and event-based applications.
Advantages of SNN
- Capture temporal patterns and dynamics effectively.
- Energy-efficient processing due to sparse spike representation.
- Suitable for real-time and event-driven applications.
- Robustness to noise and variations in input data.
Disadvantages of SNN
- Complexity in training and model design.
- Limited availability of mature tools and libraries.
- Computational overhead due to spike-based computations.
- Interpretability challenges due to the distributed nature of information representation.
Differential Equation for Membrane Potential in the LIF Model
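The LIF membrane dynamics can be stated in the standard textbook form (the symbols here are the conventional ones: V is the membrane potential, τ_m the membrane time constant, V_rest the resting potential, R the membrane resistance, and I(t) the input current):

```latex
\tau_m \frac{dV(t)}{dt} = -\left(V(t) - V_{\text{rest}}\right) + R\,I(t)
```

When V(t) reaches the firing threshold, the neuron emits a spike and V is reset; whenever input is absent, the leak term pulls the potential back toward V_rest.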
In a spiking neural network, neurons do not fire at every propagation cycle; a neuron fires only when its membrane potential reaches a certain threshold. As soon as a neuron fires, it produces a signal that reaches other neurons and changes their membrane potentials. Spike trains give us a greater capacity to process spatio-temporal data.
The spatial characteristic refers to neurons being connected only to other neurons that are local to them, so inputs are processed much as a Convolutional Neural Network processes them with a filter. The temporal characteristic refers to spikes occurring at particular times, so information that would be lost in a plain binary encoding is retained in the timing of the spikes.
This allows temporal data to be processed naturally, without the added complexity of Recurrent Neural Networks. It has been shown theoretically that spiking neural networks have greater computational power than traditional artificial neural networks.
One question that may arise is why Spiking Neural Networks are not as widely used as traditional neural networks despite being computationally more powerful. The main reason is the lack of training algorithms. There are unsupervised, biologically inspired learning rules such as Hebbian learning and spike-timing-dependent plasticity (STDP), but well-established supervised training methods for SNNs are scarce.
As spike trains are not differentiable, we cannot train Spiking Neural Networks using conventional methods such as gradient descent without losing the precise temporal information. We therefore need to research and develop efficient supervised learning algorithms for SNNs before they can be used in real-life scenarios. This is a difficult job, as it requires a thorough understanding of how the brain learns and transmits information between neurons.
Traditional Neural Networks vs. SNNs
Unlike traditional neural networks, which operate on continuous activation values, SNNs process information based on temporal dynamics. This temporal processing enables SNNs to capture fine-grained temporal patterns, making them suitable for tasks like time-series analysis, event recognition, and sequence learning.
Membrane Potential Behavior During a Spike
During a spike, the membrane potential of a neuron undergoes rapid depolarization followed by a refractory period. This behavior allows for precise timing and synchronization of neuronal activity, facilitating complex information processing.
A spike train is the sequence of spikes a neuron emits over time, often shown as a two-dimensional plot of membrane potential against time containing multiple spikes. Because the timing of each discharge matters, a spiking neuron can carry much more information than a single continuous activation value.
Various Spiking Patterns
Neurons can produce a variety of spiking patterns, such as regular spiking, bursting, and chattering. In the Izhikevich neuron model, four parameters (a, b, c, and d) determine which of these patterns a neuron exhibits.
Spike Trains for a Network of 3 Neurons
In a network of three neurons, each neuron receives input spikes from the previous layer or external sources. The pattern and timing of these input spikes affect the membrane potential of the receiving neurons, determining their firing behavior and subsequent spike generation.
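A minimal way to represent this is to store each neuron’s spike train as a list of firing times and sum the weighted input a downstream neuron receives at each step. The spike times and synaptic weights below are made-up illustrative values:

```python
# Spike trains for three input neurons: each neuron's list holds the
# time steps at which that neuron fired.
spike_trains = {0: [1, 4, 7], 1: [2, 4, 9], 2: [4, 8]}

# Synaptic weights from each input neuron onto one downstream neuron.
weights = {0: 0.5, 1: 0.3, 2: 0.4}

def input_at(t):
    """Total weighted input the downstream neuron receives at step t."""
    return sum(w for n, w in weights.items() if t in spike_trains[n])

# At t=4 all three inputs fire simultaneously, so the downstream neuron
# receives its largest combined input at that moment.
peak_step = max(range(10), key=input_at)
```

The point of the sketch is that coincident spikes (here at t=4) drive the downstream membrane potential hardest, which is exactly how spike timing determines firing behavior.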
Information Representation: The Spike
In SNNs, information is encoded in the timing and rate of spikes. The precise timing of spikes carries essential information, allowing SNNs to capture the temporal dynamics of the input data.
Leaky Integrate and Fire
The Leaky Integrate and Fire model is a widely used spiking neuron model. It simulates the behavior of a neuron by integrating incoming signals over time and generating a spike when the membrane potential reaches a threshold. After a spike, the membrane potential is reset, accounting for leakage or dissipation of charge.
SNNs employ various encoding schemes to represent information. Rank Order Coding assigns importance based on the rank order of spike timings, while Population Order Coding represents information through the relative firing rates of neuronal populations.
Images to Spike Trains
SNNs can process visual information by converting images into spike trains. This conversion allows for efficient processing of visual data, enabling tasks such as image recognition and object tracking.
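One common conversion is latency (time-to-first-spike) coding, in which brighter pixels fire earlier. The sketch below uses a hypothetical list of normalized pixel intensities and an arbitrary time window; it is an illustration of the idea, not any library’s API:

```python
def image_to_spike_times(pixels, t_max=10):
    """Latency-code pixel intensities in [0.0, 1.0] into spike times.

    Brighter pixels spike earlier; a pixel of intensity 0 never spikes
    (represented here by None).
    """
    spike_times = []
    for p in pixels:
        if p <= 0:
            spike_times.append(None)          # no spike for a dark pixel
        else:
            # intensity 1.0 spikes at t=0; dimmer pixels spike later
            spike_times.append(round((1.0 - p) * t_max))
    return spike_times
```

For example, `image_to_spike_times([1.0, 0.5, 0.0])` maps the brightest pixel to an immediate spike, the mid-grey pixel to a spike halfway through the window, and the black pixel to no spike at all.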
Rank Order Coding and Population Order Coding
Rank Order Coding emphasizes the precise timing of spikes, enabling robust representation of temporal patterns. Population Order Coding, on the other hand, utilizes the relative firing rates of neuronal populations to encode information.
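A minimal sketch of rank order decoding: assign the earliest-arriving spike the largest weight and decrease the weight geometrically with rank, so the code depends only on the firing order, not on the exact times. The modulation factor here is an illustrative value, loosely following the common Thorpe-style formulation:

```python
def rank_order_code(spike_times, mod=0.5):
    """Score each input channel by the rank of its spike arrival.

    The earliest spike gets weight 1, the second gets mod, the third
    gets mod**2, and so on.
    """
    order = sorted(range(len(spike_times)), key=lambda i: spike_times[i])
    scores = [0.0] * len(spike_times)
    for rank, idx in enumerate(order):
        scores[idx] = mod ** rank
    return scores
```

Note that shifting all spike times by a constant leaves the scores unchanged, which is what makes the code robust to overall latency.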
Dynamic Vision Sensors (DVS)
Dynamic Vision Sensors are specialized sensors that capture visual information in the form of temporal changes, similar to the behavior of SNNs. DVS technology complements SNNs by providing event-driven visual input, facilitating efficient processing of dynamic visual scenes.
Training the SNN
Training an SNN involves adjusting the synaptic weights between neurons to optimize network performance. Various learning rules, such as Spike-Timing-Dependent Plasticity (STDP) and SpikeProp, have been developed to train SNNs effectively.
Spike-Timing-Dependent Plasticity (STDP)
STDP is a learning rule that modifies the strength of synapses based on the precise timing of pre- and post-synaptic spikes. This rule enables SNNs to adapt their connections and learn from temporal patterns in the input data.
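A pair-based STDP update can be sketched as follows. The learning rates, time constant, and weight bounds are illustrative values chosen for the example, not constants from any standard library:

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Pair-based STDP: adjust weight w from the relative timing of one
    pre-synaptic spike (t_pre) and one post-synaptic spike (t_post).

    Pre before post strengthens the synapse (potentiation); post before
    pre weakens it (depression). The change decays exponentially with
    the time difference.
    """
    dt = t_post - t_pre
    if dt > 0:                         # pre fired first: strengthen
        w += a_plus * math.exp(-dt / tau)
    elif dt < 0:                       # post fired first: weaken
        w -= a_minus * math.exp(dt / tau)
    return min(w_max, max(w_min, w))   # keep the weight in bounds
```

Because the synapse is strengthened only when the pre-synaptic spike plausibly contributed to the post-synaptic one, repeated application of this rule lets the network learn temporal patterns without any labels.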
SpikeProp is a learning algorithm specifically designed for training SNNs. It adapts backpropagation to spiking neurons by computing gradients with respect to spike times, enabling supervised learning in spiking neuron models.
Implementation in Python
Implementing a spiking neural network in Python is made easier by libraries and frameworks such as Brian2, NEST, and BindsNET. These tools provide the functionality needed to simulate spiking neuron models and train SNNs efficiently.
The future of Spiking Neural Networks is still uncertain. SNNs are often described as the successors of current neural networks, but there is a long way to go: they remain difficult to apply in most practical tasks. SNNs have real-time applications in image and audio processing, yet the number of such applications remains sparse.
Research papers on Spiking Neural Networks are mostly theoretical, and in some cases reported SNN performance still falls below that of a comparable fully connected neural network. There is huge scope for research in this domain, as a major part of it is still unexplored.
If you are interested in learning about machine learning in the cloud, upGrad, in collaboration with IIIT-Bangalore, offers the Master of Science in Machine Learning & AI. The course will equip you with the necessary skills for this role: math, data wrangling, statistics, programming, and cloud-related skills, and it will prepare you for the job of your dreams.