Research in Brain-Inspired Computing [1]: A Fruit Fly Brain Has Been Uploaded
Abstract: Large language models represent second-generation AI technology. Recently, Eon Systems scanned and reconstructed the entire brain of a fruit fly based on the LIF model, creating a "digital brain" and signaling that third-generation AI technology (brain-inspired computing) is about to take off. This article combines Particle Swarm Optimization (PSO) with a Spiking Neural Network (SNN) to explore AI optimization methods: PSO searches for the optimum through swarm cooperation, with velocity and position updates driven by the personal best (pbest) and the global best (gbest).
As is well known, large language models represent second-generation AI technology; with the help of RAG and fine-tuning, they have been widely applied across industries. However, a recent news story quickly went viral: the entire brain of a fruit fly has been fully uploaded and used to drive a virtual body! The Silicon Valley company Eon Systems scanned the whole brain of an adult fruit fly with electron microscopy and, building on the LIF model, reconstructed it with AI into a complete "digital brain." This signals that third-generation AI technology is on the verge of a comprehensive breakout, and brain-like computing is not far off. In the future, the human brain itself may well be digitized.
1. Particle Swarm Optimization (PSO)
Particle Swarm Optimization is a population-based stochastic optimization technique inspired by the social behavior of bird flocking.
- Particle: Each particle represents a candidate solution in the search space. It has a position vector (the parameters to optimize) and a velocity vector (direction and step size). Each particle also remembers its own best position (`pbest`) and the corresponding fitness value.
- Swarm: The entire group shares a global best position (`gbest`), the best position found by any particle so far.
- Update Rules: At each iteration, every particle updates its velocity and position based on its current velocity, its personal best, and the global best:
$$v_i^{t+1} = w \cdot v_i^{t} + c_1 r_1 (pbest_i - x_i^{t}) + c_2 r_2 (gbest - x_i^{t})$$
where $w$ is the inertia weight, $c_1, c_2$ are acceleration coefficients, and $r_1, r_2$ are random numbers in $[0,1]$. The position update is $x_i^{t+1} = x_i^{t} + v_i^{t+1}$.
- Boundary Handling: If a particle flies outside the search range, it is pulled back to the boundary and its velocity is set to zero (absorbing walls).
- Fitness Function: A function that evaluates how good a particle's position is. In this code, `fitness_function` measures the classification performance of the SNN on handwritten digits.
2. LIF Neuron Model and Spiking Neural Network (SNN)
The LIF (Leaky Integrate-and-Fire) model is a simplified neuron model describing the dynamics of the membrane potential:
- Integration: The neuron receives an input current $I_{\text{ext}}$, causing the membrane potential $v$ to rise.
- Leak: The potential naturally decays toward the resting potential $V_{\text{rest}}$, governed by the time constant $\tau_m$.
- Spike: When $v$ exceeds a threshold $V_{\text{thresh}}$, the neuron fires a spike (action potential). The potential is then reset to $V_{\text{reset}}$, and the neuron enters an absolute refractory period (`refractory`) during which it ignores inputs.
- Differential equation: $\tau_m \frac{dv}{dt} = -(v - V_{\text{rest}}) + R_m I_{\text{ext}}$, discretized using Euler's method.
Spiking Neural Network (SNN) uses spikes (discrete events) as information carriers, mimicking biological neural systems. Unlike traditional ANNs, SNNs incorporate time and communicate via spikes. The network in this code is a three-layer fully-connected SNN:
- Input layer: 64 neurons (8×8 pixels), each pixel value directly used as input current.
- Hidden layer 1: 20 LIF neurons.
- Hidden layer 2: 20 LIF neurons.
- Output layer: 10 LIF neurons (digits 0–9).
- The weights and biases are trainable parameters that scale the input currents to subsequent layers.
3. Program Explanation
The program is structured as follows:
3.1 Constants
- Simulation parameters (`SIM_TIME`, `DT`, `NUM_STEPS`, etc.) and LIF parameters (`V_REST`, `V_THRESH`, etc.) are defined as `constexpr`.
- Network dimensions: `N_INPUT`, `N_HIDDEN1`, `N_HIDDEN2`, `N_OUTPUT`.
- PSO parameters: number of particles, iterations, inertia weight, learning factors, search range, etc.
3.2 Handwritten Digit Samples
`SAMPLES` is a `vector<vector<double>>` containing 10 handcrafted 8×8 patterns (one per digit). `LABELS` holds the corresponding labels.
3.3 LIF Neuron Class LIFNeuron
- Members: `v` (membrane potential) and `refractory` (remaining refractory steps).
- Method: `step(double I_ext)` updates the neuron given an input current and returns `true` if a spike occurs. It implements the Euler discretization and threshold check.
3.4 SNN Class
- Contains three neuron layers (`hidden1`, `hidden2`, `output`), weight matrices (`w_in_h1`, `w_h1_h2`, `w_h2_out`), and biases (`bias_h1`, `bias_h2`, `bias_out`).
- Constructor: Extracts weights and biases from the flat `params` vector.
- `simulate` method: Runs the network on a single input sample and returns spike counts for each output neuron. Steps:
  - Make a copy of the network (`net_copy`) to preserve state for an independent simulation.
  - For each time step:
    - Compute the input current to hidden layer 1 (pixel values × weights + bias).
    - Update hidden layer 1 and record spikes.
    - Compute the input to hidden layer 2 from hidden layer 1's spikes and the weights.
    - Update hidden layer 2 and record spikes.
    - Compute the input to the output layer from hidden layer 2's spikes and the weights.
    - Update the output layer; if a spike occurs, increment that neuron's count.
3.5 PSO Particle Struct Particle
- Stores position, velocity, personal best position, and personal best fitness.
- Constructor randomly initializes position and velocity; personal best initially set to current position.
3.6 PSO Swarm Class Swarm
- Contains an array of particles, the global best position, and the global best fitness.
- `update` method performs one PSO iteration:
  - Evaluate the fitness of all particles (calling `fitness_function`), updating personal bests and the global best.
  - For each particle, update velocity and position using the PSO formula, then apply boundary clipping (absorbing walls).
3.7 Fitness Function fitness_function
- For each sample:
  - Construct an SNN with the current parameters.
  - Simulate the sample to get output spike counts.
  - Find the output neuron with the highest spike count.
  - If that index matches the label, add its spike count to the total fitness; otherwise subtract it.
- Sum over all samples to obtain the final fitness. Maximizing this fitness encourages correct classification with high spike rates.
3.8 Main Function main
- Computes the total dimension `dim` of the parameter vector.
- Prints basic info.
- Creates a `Swarm` object.
- Enters the main PSO loop, printing the best fitness after each iteration.
- After finishing, outputs the best fitness, the first 10 parameters, and the total elapsed time.
4. Program Features
- Written in pure C++17, using `std::vector` for automatic memory management.
- Uses `std::mt19937` for high-quality random number generation.
- Clear separation of simulation and optimization logic, easy to modify or extend.
1. Meaning of the Differential Equation
The membrane potential dynamics of a LIF (Leaky Integrate-and-Fire) neuron are described by the following differential equation:
$$\tau_m \frac{dv}{dt} = -(v - V_{\text{rest}}) + R_m I_{\text{ext}}$$
Where:
- $v$ is the membrane potential (in mV), the voltage difference across the cell membrane.
- $t$ is time (in ms).
- $\frac{dv}{dt}$ is the rate of change of the membrane potential over time.
- $\tau_m$ is the membrane time constant (in ms), which determines how quickly the potential responds to input; $\tau_m = R_m C_m$, with $R_m$ the membrane resistance and $C_m$ the membrane capacitance.
- $V_{\text{rest}}$ is the resting potential, the stable value when no external current is applied.
- $R_m$ is the membrane resistance (in MΩ), representing the opposition to ionic current.
- $I_{\text{ext}}$ is the external input current (in nA), originating from synaptic inputs or other stimuli.
This equation captures two key processes:
- Leakage term $-(v - V_{\text{rest}})$: When $v$ is above $V_{\text{rest}}$, this term is negative, driving the potential downward; when below, it is positive. This models the passive diffusion of ions through membrane channels, always pushing the potential back toward rest.
- Driving term $R_m I_{\text{ext}}$: External current flowing through the membrane resistance generates a voltage, increasing the potential.
Thus, the rate of change is a balance between leakage and input drive. With zero input, the potential decays exponentially to rest; with constant current, it rises to a new steady state.
2. Why Discretization?
Computer simulations cannot handle continuous time; we must discretize time into a series of steps (with step size Δ t \Delta t Δt). Therefore, the continuous differential equation must be converted into a discrete difference equation to update the membrane potential at each time step.
3. Euler Method
The Euler method is the simplest numerical integration technique. It approximates the value at the next time step using the slope (derivative) at the current time:
$$v(t + \Delta t) \approx v(t) + \Delta t \cdot f(v(t), t)$$
where $\frac{dv}{dt} = f(v, t)$. Essentially, we add the current slope multiplied by the step size to the current value.
4. Discretizing the LIF Equation
For our LIF model, $\frac{dv}{dt} = \frac{1}{\tau_m}\left[-(v - V_{\text{rest}}) + R_m I_{\text{ext}}\right]$. Applying the Euler method:

$$v(t + \Delta t) = v(t) + \Delta t \cdot \frac{1}{\tau_m}\left[-(v(t) - V_{\text{rest}}) + R_m I_{\text{ext}}(t)\right]$$

Denoting $v_t$ as the current potential and $v_{t+1}$ as the next one, with fixed step $\Delta t$ (represented by `DT` in the code), we get:

$$v_{t+1} = v_t + \frac{\Delta t}{\tau_m}\left[-(v_t - V_{\text{rest}}) + R_m I_{\text{ext}}\right]$$
This is exactly the update implemented in the `LIFNeuron::step` method:
double dv = (DT / TAU_M) * (-(v - V_REST) + R_M * I_ext);
v += dv;
- First compute the change `dv`, then add it to the current potential.
- After updating, the code checks whether the threshold is exceeded; if so, it emits a spike, resets the potential, and enters a refractory period.
5. Accuracy and Limitations of Euler Method
The Euler method is first-order accurate, meaning the error is proportional to the step size $\Delta t$. To ensure stability, the step size should be much smaller than the smallest time constant of the system (here $\tau_m = 10$ ms). Our program uses $\Delta t = 0.5$ ms, which satisfies this condition and yields sufficiently accurate results for this neuron model.
C++ Code Implementation
Debugged and verified in Visual Studio 2022; the specific parameters and features are left for readers to optimize and extend.
/**
 * File: exalnn.cpp
 * Description: Three-layer LIF spiking neural network + PSO for handwritten digit recognition (pure C++)
 * Build: Create an empty project in Visual Studio 2022, add this file, compile and run (Release mode recommended)
 */
#include <iostream>
#include <vector>
#include <random>
#include <cmath>
#include <chrono>
using namespace std;
// ==================== Constants (C++ style) ====================
constexpr double SIM_TIME = 500.0; // simulation time (ms)
constexpr double DT = 0.5; // time step (ms)
constexpr int NUM_STEPS = static_cast<int>(SIM_TIME / DT);
constexpr double T_REFRACT = 2.0; // refractory period (ms)
constexpr int REFRACT_STEPS = static_cast<int>(T_REFRACT / DT);
// LIF neuron parameters
constexpr double V_REST = -70.0; // resting potential (mV)
constexpr double V_RESET = -75.0; // reset potential
constexpr double V_THRESH = -55.0; // firing threshold
constexpr double TAU_M = 10.0; // membrane time constant (ms)
constexpr double R_M = 10.0; // membrane resistance (MOhm)
// Network structure
constexpr int N_INPUT = 64; // input layer (8x8 pixels)
constexpr int N_HIDDEN1 = 20; // first hidden layer
constexpr int N_HIDDEN2 = 20; // second hidden layer
constexpr int N_OUTPUT = 10; // output layer (digits 0-9)
// PSO parameters
constexpr int N_PARTICLES = 300; // number of particles
constexpr int MAX_ITER = 200; // number of iterations
constexpr double W = 0.7; // inertia weight
constexpr double C1 = 1.5; // cognitive learning factor
constexpr double C2 = 1.5; // social learning factor
constexpr double W_MAX = 5.0; // weight search range [-W_MAX, W_MAX]
constexpr double INIT_VEL_SCALE = 0.1; // initial velocity scale
// ==================== Handwritten digit samples (hand-crafted) ====================
constexpr int N_SAMPLES = 10;
const vector<vector<double>> SAMPLES = {
// Digit 0
{0,1,1,1,1,1,1,0,
1,0,0,0,0,0,0,1,
1,0,0,0,0,0,0,1,
1,0,0,0,0,0,0,1,
1,0,0,0,0,0,0,1,
1,0,0,0,0,0,0,1,
1,0,0,0,0,0,0,1,
0,1,1,1,1,1,1,0},
// Digit 1
{0,0,1,0,0,0,0,0,
0,0,1,0,0,0,0,0,
0,0,1,0,0,0,0,0,
0,0,1,0,0,0,0,0,
0,0,1,0,0,0,0,0,
0,0,1,0,0,0,0,0,
0,0,1,0,0,0,0,0,
0,0,1,0,0,0,0,0},
// Digit 2
{1,1,1,1,1,1,1,1,
0,0,0,0,0,0,0,1,
0,0,0,0,0,0,1,0,
0,0,0,0,0,1,0,0,
0,0,0,0,1,0,0,0,
0,0,0,1,0,0,0,0,
0,0,1,0,0,0,0,0,
1,1,1,1,1,1,1,1},
// Digit 3
{1,1,1,1,1,1,1,0,
0,0,0,0,0,0,0,1,
0,0,0,0,0,0,0,1,
1,1,1,1,1,1,1,0,
0,0,0,0,0,0,0,1,
0,0,0,0,0,0,0,1,
1,1,1,1,1,1,1,0,
0,0,0,0,0,0,0,0},
// Digit 4
{0,0,1,0,0,1,0,0,
0,0,1,0,0,1,0,0,
0,0,1,0,0,1,0,0,
0,0,1,1,1,1,1,0,
0,0,0,0,0,1,0,0,
0,0,0,0,0,1,0,0,
0,0,0,0,0,1,0,0,
0,0,0,0,0,0,0,0},
// Digit 5
{1,1,1,1,1,1,1,1,
1,0,0,0,0,0,0,0,
1,0,0,0,0,0,0,0,
1,1,1,1,1,1,1,0,
0,0,0,0,0,0,0,1,
0,0,0,0,0,0,0,1,
1,1,1,1,1,1,1,0,
0,0,0,0,0,0,0,0},
// Digit 6
{0,1,1,1,1,1,1,0,
1,0,0,0,0,0,0,1,
1,0,0,0,0,0,0,0,
1,0,0,0,0,0,0,0,
1,1,1,1,1,1,1,0,
1,0,0,0,0,0,0,1,
1,0,0,0,0,0,0,1,
0,1,1,1,1,1,1,0},
// Digit 7
{1,1,1,1,1,1,1,1,
0,0,0,0,0,0,0,1,
0,0,0,0,0,0,1,0,
0,0,0,0,0,1,0,0,
0,0,0,0,1,0,0,0,
0,0,0,1,0,0,0,0,
0,0,1,0,0,0,0,0,
0,1,0,0,0,0,0,0},
// Digit 8
{0,1,1,1,1,1,1,0,
1,0,0,0,0,0,0,1,
1,0,0,0,0,0,0,1,
0,1,1,1,1,1,1,0,
1,0,0,0,0,0,0,1,
1,0,0,0,0,0,0,1,
1,0,0,0,0,0,0,1,
0,1,1,1,1,1,1,0},
// Digit 9
{0,1,1,1,1,1,1,0,
1,0,0,0,0,0,0,1,
1,0,0,0,0,0,0,1,
1,0,0,0,0,0,0,1,
0,1,1,1,1,1,1,0,
0,0,0,0,0,0,0,1,
1,0,0,0,0,0,0,1,
0,1,1,1,1,1,1,0}
};
const vector<int> LABELS = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
// ==================== LIF neuron class ====================
struct LIFNeuron {
double v; // membrane potential
int refractory; // remaining refractory steps
LIFNeuron() : v(V_REST), refractory(0) {}
// Single-step update; returns whether a spike is fired
bool step(double I_ext) {
if (refractory > 0) {
refractory--;
return false;
}
double dv = (DT / TAU_M) * (-(v - V_REST) + R_M * I_ext);
v += dv;
if (v >= V_THRESH) {
v = V_RESET;
refractory = REFRACT_STEPS;
return true;
}
return false;
}
};
// ==================== Three-layer SNN class ====================
class SNN {
public:
vector<LIFNeuron> hidden1;
vector<LIFNeuron> hidden2;
vector<LIFNeuron> output;
// Weights (flat arrays, stored row-major)
vector<double> w_in_h1; // [N_HIDDEN1 * N_INPUT]
vector<double> w_h1_h2; // [N_HIDDEN2 * N_HIDDEN1]
vector<double> w_h2_out; // [N_OUTPUT * N_HIDDEN2]
vector<double> bias_h1; // [N_HIDDEN1]
vector<double> bias_h2; // [N_HIDDEN2]
vector<double> bias_out; // [N_OUTPUT]
// Constructor: initialize the network from a parameter vector
SNN(const vector<double>& params) {
hidden1.resize(N_HIDDEN1);
hidden2.resize(N_HIDDEN2);
output.resize(N_OUTPUT);
w_in_h1.resize(N_HIDDEN1 * N_INPUT);
w_h1_h2.resize(N_HIDDEN2 * N_HIDDEN1);
w_h2_out.resize(N_OUTPUT * N_HIDDEN2);
bias_h1.resize(N_HIDDEN1);
bias_h2.resize(N_HIDDEN2);
bias_out.resize(N_OUTPUT);
// Parse params
size_t pos = 0;
for (int i = 0; i < N_HIDDEN1 * N_INPUT; ++i) w_in_h1[i] = params[pos++];
for (int i = 0; i < N_HIDDEN2 * N_HIDDEN1; ++i) w_h1_h2[i] = params[pos++];
for (int i = 0; i < N_OUTPUT * N_HIDDEN2; ++i) w_h2_out[i] = params[pos++];
for (int i = 0; i < N_HIDDEN1; ++i) bias_h1[i] = params[pos++];
for (int i = 0; i < N_HIDDEN2; ++i) bias_h2[i] = params[pos++];
for (int i = 0; i < N_OUTPUT; ++i) bias_out[i] = params[pos++];
}
// Simulate a single sample; return spike counts for each output neuron
vector<int> simulate(const vector<double>& input) const {
vector<int> spike_counts(N_OUTPUT, 0);
// Each simulation needs fresh neuron state, so work on a copy of the network
SNN net_copy = *this; // the default copy constructor copies the member vectors (and their data)
for (int t = 0; t < NUM_STEPS; ++t) {
// Compute input to hidden layer 1
vector<double> I_h1(N_HIDDEN1);
for (int i = 0; i < N_HIDDEN1; ++i) {
I_h1[i] = bias_h1[i];
for (int j = 0; j < N_INPUT; ++j) {
I_h1[i] += w_in_h1[i * N_INPUT + j] * input[j];
}
}
// Update hidden layer 1, record spikes
vector<bool> spike_h1(N_HIDDEN1);
for (int i = 0; i < N_HIDDEN1; ++i) {
spike_h1[i] = net_copy.hidden1[i].step(I_h1[i]);
}
// Compute input to hidden layer 2
vector<double> I_h2(N_HIDDEN2, 0.0);
for (int i = 0; i < N_HIDDEN2; ++i) {
I_h2[i] = bias_h2[i];
for (int j = 0; j < N_HIDDEN1; ++j) {
if (spike_h1[j]) {
I_h2[i] += w_h1_h2[i * N_HIDDEN1 + j];
}
}
}
// Update hidden layer 2, record spikes
vector<bool> spike_h2(N_HIDDEN2);
for (int i = 0; i < N_HIDDEN2; ++i) {
spike_h2[i] = net_copy.hidden2[i].step(I_h2[i]);
}
// Compute input to the output layer
vector<double> I_out(N_OUTPUT, 0.0);
for (int i = 0; i < N_OUTPUT; ++i) {
I_out[i] = bias_out[i];
for (int j = 0; j < N_HIDDEN2; ++j) {
if (spike_h2[j]) {
I_out[i] += w_h2_out[i * N_HIDDEN2 + j];
}
}
}
// Update the output layer, accumulate spikes
for (int i = 0; i < N_OUTPUT; ++i) {
if (net_copy.output[i].step(I_out[i])) {
spike_counts[i]++;
}
}
}
return spike_counts;
}
};
// ==================== PSO particle struct ====================
struct Particle {
vector<double> position;
vector<double> velocity;
vector<double> best_pos;
double best_fitness;
Particle(int dim) : position(dim), velocity(dim), best_pos(dim), best_fitness(-1e30) {
static mt19937 rng(static_cast<unsigned>(chrono::steady_clock::now().time_since_epoch().count()));
uniform_real_distribution<double> dist(-W_MAX, W_MAX);
uniform_real_distribution<double> vel_dist(-W_MAX * INIT_VEL_SCALE, W_MAX * INIT_VEL_SCALE);
for (int i = 0; i < dim; ++i) {
position[i] = dist(rng);
velocity[i] = vel_dist(rng);
best_pos[i] = position[i];
}
}
};
// ==================== PSO swarm class ====================
class Swarm {
public:
vector<Particle> particles;
vector<double> global_best_pos;
double global_best_fitness;
int dim;
Swarm(int dim_) : dim(dim_), global_best_pos(dim_), global_best_fitness(-1e30) {
particles.reserve(N_PARTICLES);
for (int i = 0; i < N_PARTICLES; ++i) {
particles.emplace_back(dim);
}
}
// Update the swarm (using the given fitness function)
void update(double (*fitness_func)(const vector<double>&)) {
static mt19937 rng(static_cast<unsigned>(chrono::steady_clock::now().time_since_epoch().count()));
uniform_real_distribution<double> dist01(0.0, 1.0);
// Evaluate fitness
for (auto& p : particles) {
double fitness = fitness_func(p.position);
if (fitness > p.best_fitness) {
p.best_fitness = fitness;
p.best_pos = p.position;
}
if (fitness > global_best_fitness) {
global_best_fitness = fitness;
global_best_pos = p.position;
}
}
// Update velocities and positions
for (auto& p : particles) {
for (int j = 0; j < dim; ++j) {
double r1 = dist01(rng);
double r2 = dist01(rng);
double vel = W * p.velocity[j]
+ C1 * r1 * (p.best_pos[j] - p.position[j])
+ C2 * r2 * (global_best_pos[j] - p.position[j]);
p.velocity[j] = vel;
p.position[j] += vel;
// Boundary handling (absorbing walls)
if (p.position[j] > W_MAX) {
p.position[j] = W_MAX;
p.velocity[j] = 0.0;
}
else if (p.position[j] < -W_MAX) {
p.position[j] = -W_MAX;
p.velocity[j] = 0.0;
}
}
}
}
};
// ==================== Fitness function (uses the global samples) ====================
double fitness_function(const vector<double>& params) {
double total_fitness = 0.0;
for (int s = 0; s < N_SAMPLES; ++s) {
SNN net(params);
auto spike_counts = net.simulate(SAMPLES[s]);
// Find the output neuron with the most spikes
int max_spike = -1;
int max_idx = -1;
for (int i = 0; i < N_OUTPUT; ++i) {
if (spike_counts[i] > max_spike) {
max_spike = spike_counts[i];
max_idx = i;
}
}
if (max_idx == LABELS[s]) {
total_fitness += max_spike;
}
else {
total_fitness -= max_spike;
}
}
return total_fitness;
}
// ==================== Main function ====================
int main() {
// Compute the parameter dimension
int dim = N_HIDDEN1 * N_INPUT + N_HIDDEN2 * N_HIDDEN1 + N_OUTPUT * N_HIDDEN2
+ N_HIDDEN1 + N_HIDDEN2 + N_OUTPUT;
cout << "Three-layer LIF-SNN handwritten digit recognition (PSO optimization)" << endl;
cout << "Network structure: " << N_INPUT << " -> " << N_HIDDEN1 << " -> "
<< N_HIDDEN2 << " -> " << N_OUTPUT << endl;
cout << "Parameter dimension: " << dim << endl;
cout << "Particles: " << N_PARTICLES << ", iterations: " << MAX_ITER << endl;
// Initialize the swarm
Swarm swarm(dim);
// Start timing (optional)
auto start_time = chrono::steady_clock::now();
// PSO main loop
for (int iter = 1; iter <= MAX_ITER; ++iter) {
swarm.update(fitness_function);
cout << "Iteration " << iter << ", best fitness: " << fixed << swarm.global_best_fitness << endl;
}
auto end_time = chrono::steady_clock::now();
auto elapsed_ms = chrono::duration_cast<chrono::milliseconds>(end_time - start_time).count();
cout << "\nOptimization complete! Best fitness: " << swarm.global_best_fitness << endl;
cout << "First 10 best parameters: ";
for (int i = 0; i < min(10, dim); ++i) {
cout << swarm.global_best_pos[i] << " ";
}
cout << "\nTotal time: " << elapsed_ms << " ms" << endl;
return 0;
}