【Academic Conference Updates | Research Essentials】CNKI + EI + Scopus + Inspec + Google Scholar Indexing | Conferences on Mathematics & ML, Applied Statistics, Modeling & Algorithms, AI & Education, and Architecture Research & Ecological Environment!

Welcome, friends! Please like, follow, and bookmark.
Wishing everyone success in every exam and acceptance for every submission. Good luck making it through! upupup

Most universities require master's and doctoral students to attend academic conferences and publish EI- or SCI-indexed conference papers before graduation. For details, scan the QR code at the bottom of this post ("学术会议小灵通") or see the academic information column: https://blog.csdn.net/2401_89898861/article/details/148877490


🧮 Mathematics & Machine Learning | The 3rd ICMML 2025 (ACM conference)

  • 🚀 Conference: The 3rd International Conference on Mathematics and Machine Learning (ICMML 2025)
  • 📅 Dates: November 14-16, 2025
  • 📍 Location: Nanjing, Jiangsu, China
  • ✨ Highlights: Fast review in 5-7 days! ACM publication with reliable indexing; the ancient capital Nanjing hosts the meeting of mathematics and AI!
  • 📚 Indexing: EI Compendex, Scopus
  • 👨🔢 Audience: Master's and doctoral students in mathematical modeling, machine learning, and algorithm theory seeking interdisciplinary innovation plus a taste of Jinling culture!
  • Topics: optimization algorithms, mathematical foundations. Algorithm: a gradient descent optimizer implementation (a usage sketch follows the listing below)
import numpy as np
import matplotlib.pyplot as plt

class GradientDescentOptimizer:
    """Fixed-step gradient descent optimizer."""
    def __init__(self, learning_rate=0.01, max_iter=1000, tolerance=1e-6):
        self.learning_rate = learning_rate
        self.max_iter = max_iter
        self.tolerance = tolerance
        self.loss_history = []
        
    def quadratic_function(self, x):
        """Example quadratic function: f(x) = x^2 + 2x + 1"""
        return x**2 + 2*x + 1

    def quadratic_gradient(self, x):
        """Gradient of the quadratic function: f'(x) = 2x + 2"""
        return 2*x + 2
    
    def rosenbrock_function(self, x, y):
        """Rosenbrock function (a standard benchmark for optimization algorithms)."""
        return (1 - x)**2 + 100 * (y - x**2)**2

    def rosenbrock_gradient(self, x, y):
        """Gradient of the Rosenbrock function."""
        dx = -2 * (1 - x) - 400 * x * (y - x**2)
        dy = 200 * (y - x**2)
        return np.array([dx, dy])
    
    def optimize_1d(self, initial_x):
        """One-dimensional gradient descent."""
        x = initial_x
        self.loss_history = []
        
        for i in range(self.max_iter):
            # Compute the gradient and the loss
            grad = self.quadratic_gradient(x)
            loss = self.quadratic_function(x)
            self.loss_history.append(loss)
            
            # Update the parameter
            x_new = x - self.learning_rate * grad
            
            # Check convergence
            if abs(x_new - x) < self.tolerance:
                print(f"Converged after {i} iterations")
                break
                
            x = x_new
        
        return x, self.quadratic_function(x)
    
    def optimize_2d(self, initial_point):
        """Two-dimensional gradient descent on the Rosenbrock function."""
        point = np.array(initial_point)
        self.loss_history = []
        trajectory = [point.copy()]
        
        for i in range(self.max_iter):
            # Compute the gradient and the loss
            grad = self.rosenbrock_gradient(point[0], point[1])
            loss = self.rosenbrock_function(point[0], point[1])
            self.loss_history.append(loss)
            
            # Update the parameters
            new_point = point - self.learning_rate * grad
            trajectory.append(new_point.copy())
            
            # Check convergence
            if np.linalg.norm(new_point - point) < self.tolerance:
                print(f"Converged after {i} iterations")
                break
                
            point = new_point
        
        return point, self.rosenbrock_function(point[0], point[1]), trajectory
    
    def plot_optimization_1d(self):
        """Plot the objective function and the 1-D convergence history."""
        fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))
        
        # Objective function over x
        x_vals = np.linspace(-3, 1, 100)
        ax1.plot(x_vals, self.quadratic_function(x_vals), 'b-', label='Objective function')
        ax1.set_xlabel('x')
        ax1.set_ylabel('f(x)')
        ax1.legend()
        ax1.grid(True)
        
        # Loss history over iterations
        ax2.plot(self.loss_history, 'r--', label='Loss history')
        ax2.set_xlabel('Iteration')
        ax2.set_ylabel('Function value')
        ax2.set_title('1-D gradient descent')
        ax2.legend()
        ax2.grid(True)
        
        plt.tight_layout()
        plt.show()
    
    def plot_optimization_2d(self, trajectory):
        """Plot the 2-D optimization trajectory on the Rosenbrock surface."""
        # Build a grid
        x = np.linspace(-2, 2, 100)
        y = np.linspace(-1, 3, 100)
        X, Y = np.meshgrid(x, y)
        Z = self.rosenbrock_function(X, Y)
        
        # Create the 3D figure
        fig = plt.figure(figsize=(12, 8))
        ax = fig.add_subplot(111, projection='3d')
        
        # Plot the surface
        surf = ax.plot_surface(X, Y, Z, cmap='viridis', alpha=0.6)
        
        # Plot the optimization trajectory
        traj_x = [p[0] for p in trajectory]
        traj_y = [p[1] for p in trajectory]
        traj_z = [self.rosenbrock_function(p[0], p[1]) for p in trajectory]
        
        ax.plot(traj_x, traj_y, traj_z, 'r.-', markersize=10, label='Optimization trajectory')
        ax.set_xlabel('X')
        ax.set_ylabel('Y')
        ax.set_zlabel('Z')
        ax.set_title('Gradient descent on the Rosenbrock function')
        plt.legend()
        plt.show()
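
A minimal usage sketch for the optimizer above, assuming the numpy/matplotlib imports at the top of the listing; the learning rates, iteration budgets, and starting points are illustrative choices, not values taken from the conference material:

if __name__ == "__main__":
    # 1-D example: minimize f(x) = x^2 + 2x + 1, whose minimum is at x = -1
    opt_1d = GradientDescentOptimizer(learning_rate=0.1, max_iter=1000)
    x_best, f_best = opt_1d.optimize_1d(initial_x=3.0)
    print(f"1-D result: x = {x_best:.4f}, f(x) = {f_best:.6f}")
    opt_1d.plot_optimization_1d()

    # 2-D example: plain gradient descent crawls along the Rosenbrock valley,
    # so with this small step size it only gets part of the way to (1, 1)
    opt_2d = GradientDescentOptimizer(learning_rate=0.0005, max_iter=10000)
    point, value, trajectory = opt_2d.optimize_2d(initial_point=[-1.5, 2.0])
    print(f"2-D result: {point}, f = {value:.6f}")
    opt_2d.plot_optimization_2d(trajectory)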

📊 Applied Statistics & Algorithms | The 3rd ASMA 2025 (IEEE conference)

  • 🚀 Conference: The 3rd International Conference on Applied Statistics, Modeling and Advanced Algorithms (ASMA 2025)
  • 📅 Dates: November 14-16, 2025
  • 📍 Location: Harbin, China
  • ✨ Highlights: IEEE publication with dual-database indexing; the ice city of Harbin powers breakthroughs in statistical modeling and algorithms!
  • 📚 Indexing: EI Compendex, Scopus
  • 👩🔬 Audience: Researchers in applied statistics, data modeling, and algorithm optimization who value the IEEE platform and the northern academic scene!
  • Topics: statistical modeling, hypothesis testing. Algorithm: linear regression with statistical inference (a usage sketch follows the listing below)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
from scipy import stats

class LinearRegressionWithInference:
    """Linear regression with classical statistical inference (built on statsmodels OLS)."""
    def __init__(self):
        self.coefficients = None
        self.intercept = None
        self.std_errors = None
        self.t_values = None
        self.p_values = None
        self.r_squared = None
        self.residuals = None
        
    def fit(self, X, y, add_constant=True):
        """Fit the linear regression model and compute inference statistics."""
        # Add the constant (intercept) term
        if add_constant:
            X = sm.add_constant(X)
        
        # Fit by ordinary least squares
        model = sm.OLS(y, X)
        results = model.fit()
        
        # Extract results (as plain arrays so positional indexing below is safe)
        params = np.asarray(results.params)
        if add_constant:
            self.intercept = params[0]
            self.coefficients = params[1:]
        else:
            self.intercept = 0
            self.coefficients = params
            
        self.std_errors = np.asarray(results.bse)
        self.t_values = np.asarray(results.tvalues)
        self.p_values = np.asarray(results.pvalues)
        self.r_squared = results.rsquared
        self.residuals = np.asarray(results.resid)
        
        return results
    
    def predict(self, X, add_constant=True):
        """Predict with the fitted model."""
        if add_constant:
            X = sm.add_constant(X)
            params = np.concatenate([[self.intercept], self.coefficients])
        else:
            params = self.coefficients
        return np.dot(X, params)
    
    def summary(self, feature_names=None):
        """Print a model summary (assumes the model was fitted with an intercept)."""
        if feature_names is None:
            feature_names = [f'X{i}' for i in range(len(self.coefficients))]
        
        print("=" * 50)
        print("Linear regression summary")
        print("=" * 50)
        print(f"{'Variable':<10} {'Coef.':<10} {'Std.Err.':<10} {'t':<10} {'P>|t|':<10}")
        print("-" * 50)
        
        print(f"{'Intercept':<10} {self.intercept:<10.4f} {self.std_errors[0]:<10.4f} "
              f"{self.t_values[0]:<10.4f} {self.p_values[0]:<10.4f}")
        
        for i, name in enumerate(feature_names):
            print(f"{name:<10} {self.coefficients[i]:<10.4f} {self.std_errors[i+1]:<10.4f} "
                  f"{self.t_values[i+1]:<10.4f} {self.p_values[i+1]:<10.4f}")
        
        print("-" * 50)
        print(f"R²: {self.r_squared:.4f}")
        print("=" * 50)
    
    def residual_analysis(self, y_true, y_pred):
        """Residual diagnostics."""
        residuals = np.asarray(y_true) - np.asarray(y_pred)
        
        plt.figure(figsize=(15, 10))
        
        # Residuals vs fitted values
        plt.subplot(2, 2, 1)
        plt.scatter(y_pred, residuals, alpha=0.6)
        plt.axhline(y=0, color='r', linestyle='--')
        plt.xlabel('Fitted values')
        plt.ylabel('Residuals')
        plt.title('Residuals vs fitted values')
        
        # Q-Q plot
        plt.subplot(2, 2, 2)
        sm.qqplot(residuals, line='s', ax=plt.gca())
        plt.title('Q-Q plot')
        
        # Residual histogram
        plt.subplot(2, 2, 3)
        plt.hist(residuals, bins=30, alpha=0.7, edgecolor='black')
        plt.xlabel('Residuals')
        plt.ylabel('Frequency')
        plt.title('Residual distribution')
        
        # Lag-1 residual autocorrelation (a full analysis would use an ACF plot)
        plt.subplot(2, 2, 4)
        plt.text(0.5, 0.5, f'Lag-1 autocorrelation: {pd.Series(residuals).autocorr():.3f}', 
                ha='center', va='center', transform=plt.gca().transAxes, fontsize=12)
        plt.axis('off')
        plt.title('Residual autocorrelation')
        
        plt.tight_layout()
        plt.show()
        
        # Normality test on the residuals
        _, normality_pvalue = stats.normaltest(residuals)
        print(f"Residual normality test p-value: {normality_pvalue:.4f}")
        
        return residuals
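
A minimal usage sketch for the regression class above; the synthetic data, true coefficients, and noise level are illustrative assumptions:

if __name__ == "__main__":
    # Synthetic data: y = 2 + 1.5*x1 - 0.8*x2 + noise
    rng = np.random.default_rng(42)
    X = rng.normal(size=(200, 2))
    y = 2 + 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.5, size=200)

    reg = LinearRegressionWithInference()
    reg.fit(X, y)
    reg.summary(feature_names=['x1', 'x2'])

    # Fitted values and residual diagnostics
    y_pred = reg.predict(X)
    reg.residual_analysis(y, y_pred)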

🎓 Artificial Intelligence & Education | The 4th ICAIE 2025 (ACM conference)

  • 🚀 Conference: The 4th International Conference on Artificial Intelligence and Education (ICAIE 2025)
  • 📅 Dates: November 21-23, 2025
  • 📍 Location: Nanjing, China
  • ✨ Highlights: High acceptance rate and fast indexing! ACM publication covered by three databases; Nanjing spotlights innovative AI applications in education!
  • 📚 Indexing: EI Compendex, Scopus, Google Scholar
  • 👨🏫 Audience: Master's and doctoral students in education technology, intelligent teaching, and AI applications who want reliable indexing and hands-on educational innovation!
  • Topics: personalized learning, educational data analysis. Algorithm: knowledge-tracing-based prediction of learning state (a usage sketch follows the listings below)
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from sklearn.metrics import accuracy_score, roc_auc_score

class KnowledgeTracingModel(nn.Module):
    """Neural-network-based knowledge tracing model (LSTM)."""
    def __init__(self, n_skills, hidden_size=50, n_layers=1):
        super(KnowledgeTracingModel, self).__init__()
        self.n_skills = n_skills
        self.hidden_size = hidden_size
        self.n_layers = n_layers
        
        # Skill embedding layer
        self.skill_embedding = nn.Embedding(n_skills + 1, hidden_size, padding_idx=0)
        
        # LSTM layer
        self.lstm = nn.LSTM(hidden_size * 2, hidden_size, n_layers, batch_first=True)
        
        # Output layer
        self.output_layer = nn.Linear(hidden_size, 1)
        
        # Dropout layer
        self.dropout = nn.Dropout(0.3)
    
    def forward(self, skill_seq, correct_seq):
        """Forward pass."""
        batch_size, seq_len = skill_seq.size()
        
        # Embed the skill IDs
        skill_embedded = self.skill_embedding(skill_seq)
        
        # Combine the correctness signal with the skill embedding
        correct_seq = correct_seq.unsqueeze(-1).float()
        combined = torch.cat([skill_embedded, correct_seq.expand(-1, -1, self.hidden_size)], dim=-1)
        
        # Run through the LSTM
        lstm_out, _ = self.lstm(combined)
        lstm_out = self.dropout(lstm_out)
        
        # Predict the probability that the next answer is correct
        output = torch.sigmoid(self.output_layer(lstm_out))
        
        return output.squeeze(-1)

class EducationalKnowledgeTracer:
    """Educational knowledge tracing system."""
    def __init__(self, n_skills, hidden_size=50, n_layers=1):
        self.model = KnowledgeTracingModel(n_skills, hidden_size, n_layers)
        self.optimizer = optim.Adam(self.model.parameters(), lr=0.001)
        self.criterion = nn.BCELoss()
        
    def prepare_sequences(self, df, student_id_col, skill_col, correct_col, max_seq_len=50):
        """Prepare training sequences from an interaction log."""
        sequences = []
        
        for student_id, student_data in df.groupby(student_id_col):
            skills = student_data[skill_col].values
            corrects = student_data[correct_col].values
            
            # Build (history, target) pairs
            for i in range(1, len(skills)):
                seq_start = max(0, i - max_seq_len)
                skill_seq = skills[seq_start:i]
                correct_seq = corrects[seq_start:i]
                target = corrects[i]
                
                sequences.append((skill_seq, correct_seq, target))
        
        return sequences
    
    def train(self, sequences, epochs=10, batch_size=32):
        """Train the model."""
        self.model.train()
        
        for epoch in range(epochs):
            total_loss = 0
            correct_predictions = 0
            total_predictions = 0
            n_batches = 0
            
            # Shuffle the training sequences
            np.random.shuffle(sequences)
            
            for i in range(0, len(sequences), batch_size):
                batch = sequences[i:i+batch_size]
                
                if not batch:
                    continue
                
                # Pad every sequence in the batch to the same length
                max_len = max(len(skill_seq) for skill_seq, _, _ in batch)
                
                skill_batch = []
                correct_batch = []
                target_batch = []
                
                for skill_seq, correct_seq, target in batch:
                    # Left-pad the sequences with zeros (the embedding's padding index)
                    padded_skill = np.pad(skill_seq, (max_len - len(skill_seq), 0), 'constant')
                    padded_correct = np.pad(correct_seq, (max_len - len(correct_seq), 0), 'constant')
                    
                    skill_batch.append(padded_skill)
                    correct_batch.append(padded_correct)
                    target_batch.append(target)
                
                # Convert to tensors
                skill_tensor = torch.LongTensor(np.array(skill_batch))
                correct_tensor = torch.LongTensor(np.array(correct_batch))
                target_tensor = torch.FloatTensor(target_batch)
                
                # Forward pass
                self.optimizer.zero_grad()
                outputs = self.model(skill_tensor, correct_tensor)
                
                # Use only the prediction at the last time step
                predictions = outputs[:, -1]
                
                # Compute the loss
                loss = self.criterion(predictions, target_tensor)
                
                # Backward pass
                loss.backward()
                self.optimizer.step()
                
                total_loss += loss.item()
                n_batches += 1
                
                # Track accuracy
                binary_predictions = (predictions > 0.5).float()
                correct_predictions += (binary_predictions == target_tensor).sum().item()
                total_predictions += len(target_tensor)
            
            accuracy = correct_predictions / total_predictions if total_predictions > 0 else 0
            avg_loss = total_loss / n_batches if n_batches > 0 else 0
            
            print(f"Epoch {epoch+1}/{epochs}, Loss: {avg_loss:.4f}, Accuracy: {accuracy:.4f}")
    
    def predict(self, skill_seq, correct_seq):
        """Predict the probability that the next answer is correct."""
        self.model.eval()
        
        with torch.no_grad():
            skill_tensor = torch.LongTensor([skill_seq])
            correct_tensor = torch.LongTensor([correct_seq])
            
            output = self.model(skill_tensor, correct_tensor)
            prediction = output[0, -1].item()
        
        return prediction
    
    def evaluate(self, test_sequences):
        """Evaluate the model on held-out sequences."""
        self.model.eval()
        all_predictions = []
        all_targets = []
        
        with torch.no_grad():
            for skill_seq, correct_seq, target in test_sequences:
                prediction = self.predict(skill_seq, correct_seq)
                all_predictions.append(prediction)
                all_targets.append(target)
        
        # Evaluation metrics
        binary_predictions = [1 if p > 0.5 else 0 for p in all_predictions]
        accuracy = accuracy_score(all_targets, binary_predictions)
        auc = roc_auc_score(all_targets, all_predictions)
        
        print(f"Test accuracy: {accuracy:.4f}")
        print(f"Test AUC: {auc:.4f}")
        
        return accuracy, auc
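
A minimal end-to-end sketch for the knowledge tracer; the synthetic interaction log, the column names student_id/skill_id/correct, and the training hyperparameters are illustrative assumptions rather than a benchmark setup:

if __name__ == "__main__":
    import pandas as pd

    # Synthetic interaction log: 100 students, 20 skills (IDs 1..20), 30 attempts each
    rng = np.random.default_rng(0)
    n_students, n_skills, n_attempts = 100, 20, 30
    records = []
    for sid in range(n_students):
        ability = rng.uniform(0.3, 0.9)  # each student answers correctly with this probability
        for _ in range(n_attempts):
            skill = int(rng.integers(1, n_skills + 1))
            correct = int(rng.random() < ability)
            records.append({'student_id': sid, 'skill_id': skill, 'correct': correct})
    df = pd.DataFrame(records)

    tracer = EducationalKnowledgeTracer(n_skills=n_skills, hidden_size=32)
    sequences = tracer.prepare_sequences(df, 'student_id', 'skill_id', 'correct')
    train_seqs, test_seqs = sequences[:2000], sequences[2000:]

    tracer.train(train_seqs, epochs=3, batch_size=64)
    tracer.evaluate(test_seqs)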

🏗️ Architecture Research & Ecological Environment | The 7th ARFEE 2025 International Symposium

  • 🚀 Conference: The 7th International Symposium on Architecture Research Frontiers and Ecological Environment (ARFEE 2025)
  • 📅 Dates: November 21-23, 2025
  • 📍 Location: Changsha, China
  • ✨ Highlights: Published in E3S with multi-database indexing; Changsha on the Xiang River blends architectural design with ecological innovation!
  • 📚 Indexing: CNKI, EI Compendex, Scopus, Inspec
  • 👩🏭 Audience: Researchers in architectural design, ecological environment, and sustainable development who care about practical application and interdisciplinary exchange!
  • Topics: building energy optimization, sustainable development. Algorithm: genetic-algorithm-based building energy optimization (a usage sketch follows the listing below)
import random

import numpy as np
import matplotlib.pyplot as plt

class BuildingEnergyOptimizer:
    """Genetic-algorithm-based building energy optimization."""
    def __init__(self, population_size=50, generations=100, mutation_rate=0.1):
        self.population_size = population_size
        self.generations = generations
        self.mutation_rate = mutation_rate
        self.best_fitness_history = []
        self.avg_fitness_history = []
        
    def create_individual(self):
        """Create an individual (a candidate building design)."""
        # Parameters: [window-to-wall ratio, wall insulation thickness, roof insulation thickness, shading coefficient, ventilation rate]
        individual = [
            random.uniform(0.2, 0.8),    # window-to-wall ratio (20%-80%)
            random.uniform(0.05, 0.2),   # wall insulation thickness (5-20 cm)
            random.uniform(0.1, 0.3),    # roof insulation thickness (10-30 cm)
            random.uniform(0.3, 0.9),    # shading coefficient (0.3-0.9)
            random.uniform(0.5, 2.0)     # ventilation rate (0.5-2.0 ACH)
        ]
        return individual
    
    def create_population(self):
        """Create the initial population."""
        return [self.create_individual() for _ in range(self.population_size)]
    
    def energy_simulation(self, individual, climate_data):
        """Simplified building energy simulation."""
        window_ratio, wall_insulation, roof_insulation, shading_coeff, ventilation_rate = individual
        
        # Simplified energy model (a real application would use a detailed simulation)
        # Baseline energy use
        base_energy = 100  # kWh/m²/year
        
        # Window effect
        window_impact = 50 * (window_ratio - 0.3)  # optimal window ratio around 30%
        
        # Insulation effect
        insulation_impact = -30 * (wall_insulation * 100 + roof_insulation * 100)  # savings per centimetre of insulation
        
        # Shading effect
        shading_impact = -40 * (shading_coeff - 0.6)  # optimal shading coefficient around 0.6
        
        # Ventilation effect
        ventilation_impact = 20 * (ventilation_rate - 1.0)  # optimal ventilation rate around 1.0 ACH
        
        # Total energy use
        total_energy = base_energy + window_impact + insulation_impact + shading_impact + ventilation_impact
        
        # Keep the energy use positive
        total_energy = max(total_energy, 10)
        
        return total_energy
    
    def comfort_evaluation(self, individual, climate_data):
        """Comfort evaluation."""
        window_ratio, wall_insulation, roof_insulation, shading_coeff, ventilation_rate = individual
        
        # Simplified comfort score (0-100)
        # Thermal comfort (driven by insulation and ventilation)
        thermal_comfort = 50 + 20 * (wall_insulation * 100 + roof_insulation * 100) + 10 * ventilation_rate
        
        # Visual comfort (driven by window ratio and shading)
        visual_comfort = 30 + 40 * window_ratio - 20 * shading_coeff
        
        # Overall comfort (weighted average)
        total_comfort = 0.6 * thermal_comfort + 0.4 * visual_comfort
        
        return min(max(total_comfort, 0), 100)
    
    def fitness_function(self, individual, climate_data):
        """Fitness function (maximize comfort, minimize energy use)."""
        energy = self.energy_simulation(individual, climate_data)
        comfort = self.comfort_evaluation(individual, climate_data)
        
        # Fitness = comfort / energy (high comfort at low energy use is rewarded)
        fitness = comfort / energy
        
        return fitness, energy, comfort
    
    def select_parents(self, population, fitnesses):
        """Select parents (roulette-wheel selection)."""
        total_fitness = sum(fitnesses)
        probabilities = [f / total_fitness for f in fitnesses]
        
        # Select via the cumulative probabilities
        cumulative_probabilities = np.cumsum(probabilities)
        selected_indices = []
        
        for _ in range(2):  # select two parents
            r = random.random()
            for i, cp in enumerate(cumulative_probabilities):
                if r <= cp:
                    selected_indices.append(i)
                    break
            else:
                # Guard against floating-point round-off at the end of the wheel
                selected_indices.append(len(population) - 1)
        
        return [population[i] for i in selected_indices]
    
    def crossover(self, parent1, parent2):
        """Crossover (uniform crossover)."""
        child = []
        for i in range(len(parent1)):
            if random.random() < 0.5:
                child.append(parent1[i])
            else:
                child.append(parent2[i])
        return child
    
    def mutate(self, individual):
        """Mutation."""
        mutated = individual.copy()
        for i in range(len(mutated)):
            if random.random() < self.mutation_rate:
                # Resample within each parameter's range
                if i == 0:  # window-to-wall ratio
                    mutated[i] = random.uniform(0.2, 0.8)
                elif i == 1:  # wall insulation thickness
                    mutated[i] = random.uniform(0.05, 0.2)
                elif i == 2:  # roof insulation thickness
                    mutated[i] = random.uniform(0.1, 0.3)
                elif i == 3:  # shading coefficient
                    mutated[i] = random.uniform(0.3, 0.9)
                elif i == 4:  # ventilation rate
                    mutated[i] = random.uniform(0.5, 2.0)
        return mutated
    
    def optimize(self, climate_data):
        """Run the optimization."""
        # Create the initial population
        population = self.create_population()
        
        # Track the best solution found so far
        best_individual = None
        best_fitness = -float('inf')
        best_energy = float('inf')
        best_comfort = 0
        
        for generation in range(self.generations):
            # Evaluate the fitness of every individual in the population
            fitnesses = []
            energies = []
            comforts = []
            
            for individual in population:
                fitness, energy, comfort = self.fitness_function(individual, climate_data)
                fitnesses.append(fitness)
                energies.append(energy)
                comforts.append(comfort)
                
                # Update the best solution
                if fitness > best_fitness:
                    best_fitness = fitness
                    best_individual = individual
                    best_energy = energy
                    best_comfort = comfort
            
            # Record the history
            self.best_fitness_history.append(best_fitness)
            self.avg_fitness_history.append(np.mean(fitnesses))
            
            # Build the next generation
            new_population = []
            
            # Elitism: the best individual survives unchanged
            new_population.append(best_individual)
            
            # Fill the rest of the population
            while len(new_population) < self.population_size:
                # Select parents
                parents = self.select_parents(population, fitnesses)
                
                # Crossover
                child = self.crossover(parents[0], parents[1])
                
                # Mutation
                child = self.mutate(child)
                
                new_population.append(child)
            
            population = new_population
            
            # Progress report
            if (generation + 1) % 10 == 0:
                print(f"Generation {generation + 1}/{self.generations}, "
                      f"Best Fitness: {best_fitness:.4f}, "
                      f"Energy: {best_energy:.2f} kWh/m², "
                      f"Comfort: {best_comfort:.1f}")
        
        return best_individual, best_fitness, best_energy, best_comfort
    
    def plot_optimization_history(self):
        """Plot the optimization history."""
        plt.figure(figsize=(10, 6))
        generations = range(1, len(self.best_fitness_history) + 1)
        
        plt.plot(generations, self.best_fitness_history, 'b-', label='Best fitness')
        plt.plot(generations, self.avg_fitness_history, 'r--', label='Average fitness')
        plt.xlabel('Generation')
        plt.ylabel('Fitness')
        plt.title('Genetic algorithm optimization history')
        plt.legend()
        plt.grid(True)
        plt.show()
    
    def analyze_solution(self, solution, climate_data):
        """Analyse the optimized design."""
        window_ratio, wall_insulation, roof_insulation, shading_coeff, ventilation_rate = solution
        
        print("=" * 50)
        print("Building energy optimization results")
        print("=" * 50)
        print(f"Window-to-wall ratio: {window_ratio:.3f} ({window_ratio*100:.1f}%)")
        print(f"Wall insulation thickness: {wall_insulation:.3f} m ({wall_insulation*100:.1f} cm)")
        print(f"Roof insulation thickness: {roof_insulation:.3f} m ({roof_insulation*100:.1f} cm)")
        print(f"Shading coefficient: {shading_coeff:.3f}")
        print(f"Ventilation rate: {ventilation_rate:.3f} ACH")
        print("-" * 50)
        
        energy = self.energy_simulation(solution, climate_data)
        comfort = self.comfort_evaluation(solution, climate_data)
        fitness = comfort / energy
        
        print(f"Predicted energy use: {energy:.2f} kWh/m²/year")
        print(f"Predicted comfort: {comfort:.1f}/100")
        print(f"Combined metric (comfort/energy): {fitness:.4f}")
        print("=" * 50)
  • 📢 Submission windows won't stay open for long. Let academic insight and these distinctive cities shine together, and reliable indexing can be yours! 📢