Swarm and Evolutionary Computation, vol. 104, 2026 (SCI-Expanded, Scopus)
Particle Swarm Optimization (PSO) is widely adopted for continuous optimization; however, its first-order velocity dynamics often suffer from premature convergence, oscillatory instability, and diversity loss, particularly on high-dimensional and structurally complex landscapes. This study proposes a residual-guided Fractional-Langevin PSO (FL–PSO) framework that reformulates the classical velocity update within a fractional–stochastic dynamical system. The residual correction term is analytically derived from the first-order linearization of an underlying fractional fixed-point operator, establishing a mathematically grounded reformulation rather than a heuristic hybrid modification. The resulting model integrates Caputo–Katugampola fractional memory, Ornstein–Uhlenbeck mean-reverting drift, and time-decaying Langevin perturbations in a unified multi-scale structure. This combination introduces long-range temporal dependence, stochastic stabilization, and controlled exploration, yielding stability-oriented search dynamics that progressively transition from an exploratory to a deterministic convergence regime. A unified and fully reproducible experimental pipeline is employed to evaluate FL–PSO across structurally diverse optimization scenarios, including shifted, rotated, hybrid, composite, high-dimensional, and constrained engineering problems. Performance is assessed not only in terms of final objective values, but also through convergence area under the curve (AUC), relative coefficient of variation (rCV), convergence–diversity trade-off metrics, and multiple-comparison-corrected nonparametric statistical tests. The results demonstrate statistically significant and systematic improvements over classical and contemporary PSO variants, particularly in convergence stability and robustness.
While state-of-the-art Differential Evolution algorithms often exhibit strong early exploitation, FL–PSO achieves competitive accuracy with lower computational overhead and more regular convergence behavior. These findings position FL–PSO as a stability-enhanced and behaviorally consistent alternative for complex continuous optimization problems.
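To make the three ingredients named above concrete, the sketch below shows one plausible form of such a velocity update: a binomial-weighted fractional-memory term over past velocities, an Ornstein–Uhlenbeck drift that pulls the velocity back toward zero, and Gaussian Langevin noise whose amplitude decays linearly with iteration count. This is a minimal illustration under assumed parameter names (`alpha`, `theta`, `sigma0`, etc.) and an assumed Grünwald–Letnikov-style memory approximation; it is not the paper's exact FL–PSO formulation, which uses the Caputo–Katugampola operator and a residual-derived correction not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def fl_pso_velocity_step(v_hist, x, pbest, gbest, t, T,
                         alpha=0.6, w=0.7, c1=1.5, c2=1.5,
                         theta=0.2, sigma0=0.5):
    """One illustrative fractional-Langevin velocity update.

    All parameter names and the exact update form are assumptions
    made for this sketch, not the paper's derivation.
    """
    # Fractional memory (Grunwald-Letnikov style): weighted sum of past
    # velocities with coefficients (-1)^k * binom(alpha, k), so older
    # velocities contribute with slowly decaying weights.
    K = len(v_hist)
    coeffs = np.ones(K)
    for k in range(1, K):
        coeffs[k] = coeffs[k - 1] * (alpha - (k - 1)) / k  # binom(alpha, k)
    memory = sum(((-1) ** k) * coeffs[k] * v_hist[-1 - k] for k in range(K))

    # Standard PSO cognitive/social attraction toward personal/global bests.
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    attract = c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

    # Ornstein-Uhlenbeck mean-reverting drift: damps the current velocity
    # back toward zero, stabilizing oscillations.
    ou_drift = -theta * v_hist[-1]

    # Langevin perturbation with time-decaying amplitude: exploratory in
    # early iterations, nearly deterministic as t approaches T.
    sigma_t = sigma0 * (1.0 - t / T)
    noise = sigma_t * rng.standard_normal(x.shape)

    return w * memory + attract + ou_drift + noise
```

The linear noise schedule `sigma0 * (1 - t/T)` is the simplest choice reproducing the exploratory-to-deterministic transition the abstract describes; any monotonically decaying schedule would serve the same role.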