Encoding positions of the end-effector using the SNS-Toolbox
This page shows an example of how to map joint angles to positions of the leg's end-effector. For a basic understanding of how to design neural networks with the SNS-Toolbox, see the documentation:
https://sns-toolbox.readthedocs.io/en/latest/#
This example is based on the conference paper: Zadokha, B., Szczecinski, N. S.: Encoding 3D Leg Kinematics using Spatially-Distributed, Population Coded Network Model. (2024)


Initial Setup
The neural network was designed and simulated in PyCharm 2022.
For simulation, SNS requires a time vector:
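A minimal sketch of this step (the time step dt, duration tMax, and variable names are illustrative, not taken from the paper):

```python
import numpy as np

dt = 0.1        # integration time step in ms (illustrative value)
tMax = 10e3     # total simulation time in ms (illustrative value)
t = np.arange(0, tMax, dt)
numSteps = len(t)
```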
Training Data
Forward Kinematics
For forward kinematics calculations, the Product of Exponentials formula was used:
To calculate the matrix exponentials, Rodrigues' rotation formula was used:
where Ŝ is the matrix representation of the screw axis S, with its angular component written as a skew-symmetric matrix.
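A sketch of these forward-kinematics helpers, stated under the standard Product of Exponentials conventions (T(θ) = e^[S1]θ1 · ... · e^[Sn]θn · M, with each rotation expanded by Rodrigues' formula, e^[ω]θ = I + sin(θ)[ω] + (1 - cos(θ))[ω]²); the function names are illustrative:

```python
def skew(w):
    """3x3 skew-symmetric matrix [w] of a 3-vector w."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def exp_screw(S, theta):
    """4x4 exponential of a screw axis S = [w, v] displaced by theta,
    using Rodrigues' formula for the rotational part."""
    w, v = np.asarray(S[:3], float), np.asarray(S[3:], float)
    wHat = skew(w)
    R = np.eye(3) + np.sin(theta)*wHat + (1 - np.cos(theta))*(wHat @ wHat)
    G = np.eye(3)*theta + (1 - np.cos(theta))*wHat + (theta - np.sin(theta))*(wHat @ wHat)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = G @ v
    return T

def forward_kinematics(Slist, thetas, M):
    """Product of Exponentials: T(theta) = e^[S1]th1 ... e^[Sn]thn M."""
    T = np.eye(4)
    for S, th in zip(Slist, thetas):
        T = T @ exp_screw(S, th)
    return T @ M
```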
Zero (home) position of the leg and the screw-axis matrix:
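The actual home configuration and screw axes of the leg are not reproduced here; the values below are hypothetical placeholders that only show the expected shapes (a 4x4 home transform M and one screw axis [w, v] per servo):

```python
# Hypothetical placeholder geometry; substitute the real leg dimensions here.
M = np.array([[1, 0, 0, 0.10],    # home (zero-angle) pose of the end-effector, meters
              [0, 1, 0, 0.00],
              [0, 0, 1, -0.05],
              [0, 0, 0, 1.0]])

Slist = [np.array([0, 0, 1, 0, 0, 0]),         # servo 1: yaw about z (placeholder)
         np.array([0, 1, 0, 0, 0, 0]),         # servo 2: pitch (placeholder)
         np.array([0, 1, 0, 0.05, 0, 0.06])]   # servo 3: pitch, offset from the base (placeholder)
```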
The network uses only servos 2 and 3 as inputs. To generate training data, the x, y, and z positions were calculated over a vector of joint angles from -1.6 to 1.6 rad:
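A sketch of the training-data sweep over servos 2 and 3, assuming servo 1 is held at its zero angle and using the helpers and placeholder geometry above. Here numBell (illustrative) is one training sample per sensory neuron defined later, so the sweep resolution matches the population size:

```python
numBell = 9                                   # illustrative: one sample per future sensory neuron
thetaVec = np.linspace(-1.6, 1.6, numBell)    # joint angle vector, rad

xT = np.zeros([numBell, numBell])
yT = np.zeros([numBell, numBell])
zT = np.zeros([numBell, numBell])

for i, th2 in enumerate(thetaVec):
    for j, th3 in enumerate(thetaVec):
        T = forward_kinematics(Slist, [0.0, th2, th3], M)   # servo 1 fixed at 0
        xT[i, j], yT[i, j], zT[i, j] = T[:3, 3]
```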
Inputs
Generating two patterns of joint angles over time:
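One way to generate the two joint-angle patterns over time; the sinusoidal form and frequencies are illustrative choices, not necessarily those of the paper:

```python
# Two sinusoids of different frequency, spanning the trained range of -1.6 to 1.6 rad.
theta2t = 1.6*np.sin(2*np.pi*t/tMax)   # servo 2 angle over time, rad
theta3t = 1.6*np.sin(4*np.pi*t/tMax)   # servo 3 angle over time, rad
```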
Bell curve activation function
For every joint, the sensory neurons that encode the input are defined using a Gaussian bell-curve equation:
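A sketch of the Gaussian receptive-field function. Consistent with the note below, a smaller widthBell makes the curve broader, because it scales the squared distance from the preferred angle:

```python
def bellCurve(theta, thetaPref, mag, widthBell):
    """Gaussian receptive field with peak response mag at the preferred angle."""
    return mag*np.exp(-widthBell*(theta - thetaPref)**2)
```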
Neural Network Design
widthBell is the width of the sensory-encoding bell curves (note that a smaller value of widthBell makes the curves broader), and delay is the membrane capacitance of the neurons.
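Illustrative values for these two parameters (the values used in the paper may differ):

```python
widthBell = 5.0   # width parameter of the sensory bell curves (smaller = broader)
delay = 5.0       # membrane capacitance of the neurons, nF
```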
Create a network object:
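A sketch assuming the SNS-Toolbox Network class, as in the toolbox tutorials; the network name is illustrative:

```python
from sns_toolbox.networks import Network

net = Network(name='LegPositionNet')
```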
Create a basic nonspiking neuron:
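A sketch using NonSpikingNeuron, with the capacitance taken from delay above and the remaining parameters left at illustrative defaults:

```python
from sns_toolbox.neurons import NonSpikingNeuron

neuronType = NonSpikingNeuron(membrane_capacitance=delay,
                              membrane_conductance=1.0,
                              resting_potential=0.0,
                              bias=0.0)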
Create the "bell curve" neurons, which are the sensory (input) neurons. Each sensory neuron has a bell-curve receptive field with a unique "preferred angle", that is, the angle at which the peak response is evoked:
Define key neuron and synapse parameters. 'mag' is like R in the functional subnetwork publications: it is the maximum expected neural activation. delE is the synaptic reversal potential relative to the rest potential of the neurons:
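Illustrative values for these parameters (the paper's values may differ):

```python
mag = 1.0     # maximum expected neural activation, analogous to R (illustrative, mV)
delE = 20.0   # synaptic reversal potential relative to rest (illustrative, mV)
```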
Initialize empty arrays to store input current for each set of sensory neurons (theta2 = input2, theta3 = input3), the input current for the whole network (inputNet), and the validation data X(t) (ActualT):
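A sketch of the pre-allocation, using the names mentioned above:

```python
input2 = np.zeros([numSteps, numBell])      # current into the servo-2 sensory neurons
input3 = np.zeros([numSteps, numBell])      # current into the servo-3 sensory neurons
inputNet = np.zeros([numSteps, 2*numBell])  # input for the whole network
ActualT = np.zeros([numSteps, 3])           # validation data: actual x, y, z over time
```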
Each sensory neuron synapses onto a number of "combo" neurons. Each combo neuron receives synaptic input from one sensory neuron from each joint. All these synapses may be identical, because they are simply generating all the possible combinations of the joint angles/independent variables. All the combo neurons may be identical, because they are simply integrating the joint angles:
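A sketch of the shared synapse type used for every sensory-to-combo connection. The conductance is chosen here, illustratively, as a gain-1 transmission synapse from the functional subnetwork literature, so two peak sensory inputs drive a combo neuron toward roughly 2*mag:

```python
from sns_toolbox.connections import NonSpikingSynapse

gCombo = mag/(delE - mag)   # illustrative gain-1 transmission conductance
synSensCombo = NonSpikingSynapse(max_conductance=gCombo,
                                 reversal_potential=delE,
                                 e_lo=0.0, e_hi=mag)
```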
For the network to perform the desired calculation, the synapses from each combo neuron to the output neuron should have effective gains (i.e., Vout/Vcombo) that are proportional to the value that the output neuron encodes. Here, we take all the "training data", that is, the actual X coordinate of the leg, normalize it to values between 0 and 1, and use those to set the unique properties of the synapse from each combo neuron to the output neuron:
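A sketch of the normalization step; xT, yT, and zT are the training grids computed earlier, and the resulting arrays hold one value in [0, 1] per combo neuron:

```python
def normalize01(v):
    """Scale an array linearly so its values span [0, 1]."""
    return (v - v.min())/(v.max() - v.min())

kSynX = normalize01(xT)   # desired normalized output of the X neuron, per combo neuron
kSynY = normalize01(yT)
kSynZ = normalize01(zT)
```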
Now we create each combo neuron, its input connections, and its output connections. Here, we have 2 nested for loops, one for each joint/independent variable. A higher-dimensional network would require more for loops or an approach in which we NDgrid the joint angles/sensory neurons and then use a single for loop. A combined sketch of the full loop is shown below, after the next two notes.
SNS-Toolbox does not allow synapses with max_conductance = 0, so if kSyn = 0, we must pass machine epsilon instead.
Synapses from the combo neurons to the output neuron(s) each have a unique conductance value that corresponds to the desired output in that scenario. Note that e_lo for these synapses = mag and e_hi = 2*mag, because multiple combo neurons will be active at any time, but we only care about the most active, so setting e_lo this way serves as a threshold mechanism:
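A combined sketch covering the three notes above (combo-neuron creation, the machine-epsilon guard, and the thresholded combo-to-output synapses). The output neurons OutX, OutY, OutZ and the gain-to-conductance mapping g = k*mag/(delE - k*mag) are illustrative choices, not necessarily those of the paper:

```python
import sys

# Output neurons that encode the x, y, z coordinates (names are illustrative).
for name in ['OutX', 'OutY', 'OutZ']:
    net.add_neuron(neuronType, name=name)
    net.add_output(name)

for i in range(numBell):        # loop over servo-2 sensory neurons
    for j in range(numBell):    # loop over servo-3 sensory neurons
        combo = 'Combo_%d_%d' % (i, j)
        net.add_neuron(neuronType, name=combo)

        # Identical synapses from one sensory neuron of each joint onto this combo neuron.
        net.add_connection(synSensCombo, 'Bell2_%d' % i, combo)
        net.add_connection(synSensCombo, 'Bell3_%d' % j, combo)

        # Unique combo-to-output synapses, one per coordinate.
        for kSyn, outName in [(kSynX[i, j], 'OutX'),
                              (kSynY[i, j], 'OutY'),
                              (kSynZ[i, j], 'OutZ')]:
            if kSyn == 0:                       # SNS-Toolbox rejects max_conductance = 0,
                kSyn = sys.float_info.epsilon   # so pass machine epsilon instead
            gOut = kSyn*mag/(delE - kSyn*mag)   # illustrative gain-to-conductance mapping
            synOut = NonSpikingSynapse(max_conductance=gOut,
                                       reversal_potential=delE,
                                       e_lo=mag,     # threshold: only the most active
                                       e_hi=2*mag)   # combo neurons drive the output
            net.add_connection(synOut, combo, outName)
```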
Compute the input currents for the theta2 and theta3 sensory neurons using the bellCurve function:
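A sketch of this step, assuming (as an illustrative simplification) that injecting a current of mag drives a sensory neuron to roughly mag:

```python
for step in range(numSteps):
    for k in range(numBell):
        input2[step, k] = bellCurve(theta2t[step], thetaPref[k], mag, widthBell)
        input3[step, k] = bellCurve(theta3t[step], thetaPref[k], mag, widthBell)
```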
Concatenate the inputs into one network input array. Calculate the "Actual X, Y, Z values as a function of time", ActualT:
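A sketch of these two steps, reusing the forward-kinematics helper for the validation data:

```python
inputNet = np.concatenate((input2, input3), axis=1)   # column order matches the add_input order

for step in range(numSteps):
    T = forward_kinematics(Slist, [0.0, theta2t[step], theta3t[step]], M)
    ActualT[step, :] = T[:3, 3]                        # actual x, y, z over time
```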
Create an output matrix for the network:
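A sketch of the output pre-allocation, with one column per output neuron added above:

```python
outputs = np.zeros([numSteps, 3])   # network output over time: x, y, z
```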
Compile and render the network model:
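A sketch following the SNS-Toolbox tutorials; the compile signature and the render helper are assumed to match the toolbox version used here:

```python
from sns_toolbox.renderer import render

model = net.compile(backend='numpy', dt=dt)   # compiled numerical model of the network
render(net)                                   # graphical rendering of the network topology
```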
Simulate the network response:
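A sketch of the simulation loop; the compiled model is assumed to be callable one time step at a time, as in the toolbox tutorials:

```python
for step in range(numSteps):
    outputs[step, :] = model(inputNet[step, :])   # advance the network one time step
```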
Note: to get actual predictions for the x, y, and z positions, those output vectors must be denormalized:
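A sketch of the denormalization, which inverts the [0, 1] scaling applied to the training data; dividing by mag first converts the output voltages back to normalized values (an illustrative choice consistent with the parameters above):

```python
def denormalize(vNorm, vTrain):
    """Invert the 0-to-1 normalization using the range of the training data."""
    return vNorm*(vTrain.max() - vTrain.min()) + vTrain.min()

xPred = denormalize(outputs[:, 0]/mag, xT)   # predicted end-effector coordinates, meters
yPred = denormalize(outputs[:, 1]/mag, yT)
zPred = denormalize(outputs[:, 2]/mag, zT)
```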