PhD Public Defense
Network Topology Inference: Accounting for Directionality, Learning Tasks, and Data Streams
Seyed Saman Saboksayr
Supervised by Gonzalo Mateos
Thursday, October 17, 2024
2:30 p.m.
224 Hopeman Building
Graph Signal Processing (GSP) plays a crucial role in addressing the growing need for information processing across networks, especially in tasks like supervised classification. However, the success of GSP in such tasks hinges on accurately identifying the underlying relational structures, which are often not readily available and must be inferred from observed signals. Assuming signal smoothness over latent class-specific graphs, we propose a novel framework for discriminative graph learning. This framework ensures signals remain smooth within their class-specific graphs and non-smooth relative to graphs of other classes. The learned representations are processed via the graph Fourier transform (GFT) to extract discriminative features for downstream learning tasks.
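The spectral feature-extraction step described above can be sketched in a few lines. The graph here is a hypothetical hand-made toy (a 4-node cycle), not a learned class-specific graph, and the signal is an assumed example chosen to be smooth on it:

```python
import numpy as np

# Hypothetical toy graph standing in for a learned class-specific graph:
# a 4-node cycle, with L the combinatorial Laplacian.
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
L = np.diag(A.sum(axis=1)) - A

# Graph Fourier transform: project signals onto the Laplacian eigenbasis.
eigvals, U = np.linalg.eigh(L)        # columns of U = graph Fourier modes
x = np.array([1.0, 1.1, 0.9, 1.05])   # a signal that is smooth on this graph
x_hat = U.T @ x                       # GFT coefficients (spectral features)

# Smoothness is measured by the Laplacian quadratic form x^T L x; smooth
# signals concentrate energy in the low-frequency (small-eigenvalue) modes,
# which is what makes the GFT coefficients discriminative features.
smoothness = x @ L @ x
```

Because the eigenvalues are sorted in ascending order, the first entries of `x_hat` carry the low-frequency energy that a smooth signal concentrates on.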
To handle dynamic environments and real-time topology inference, we extend this framework by incorporating recent advances in GSP and time-varying convex optimization. We propose a proximal gradient (PG) method that adapts to on-the-fly data acquisition, marking the first approach to online graph learning from smooth signals, where the underlying network evolves slowly. We validate the proposed frameworks through experiments on both synthetic and real data, demonstrating state-of-the-art performance in emotion classification using EEG data, network-based seizure detection with ECoG data, and analyzing stock market behavior during the COVID-19 pandemic.
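The online PG template admits a minimal sketch: one proximal-gradient step per incoming observation, tracking a slowly drifting target. The problem below is a toy sparse tracking stand-in with all data and parameters assumed, not the thesis' smooth-signal graph-learning objective:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Hedged illustration of the online proximal-gradient template: a single PG
# step per new datum tracks a slowly varying sparse vector (the analogue of
# a slowly evolving network). Data model and constants are toy assumptions.
rng = np.random.default_rng(0)
d = 20
A = rng.standard_normal((30, d))
eta = 1.0 / np.linalg.norm(A, 2) ** 2   # step from the known Lipschitz const.
lam = 0.1
w = np.zeros(d)
w_true = np.zeros(d)
w_true[:3] = 1.0                        # sparse ground truth

for t in range(200):
    w_true[:3] += 0.001                 # the target drifts slowly over time
    b_t = A @ w_true + 0.01 * rng.standard_normal(30)   # new observation
    grad = A.T @ (A @ w - b_t)          # gradient of the instantaneous loss
    w = soft_threshold(w - eta * grad, eta * lam)       # one online PG step
```

Because only one cheap step is taken per sample, the iterate never fully solves any single instance, yet it stays within a tracking error of the moving solution, the property exploited by the online graph-learning scheme.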
We further develop a fast dual-based proximal gradient algorithm for solving smoothness-regularized network inverse problems, offering global convergence without step-size tuning. An online version of this algorithm is also introduced, efficiently solving time-varying inverse problems and tracking changes in network connectivity.
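A classical setting where a dual-based PG method needs no step-size tuning is smoothness-regularized denoising on a graph: the dual is a box-constrained quadratic whose gradient has the computable Lipschitz constant ||D||^2, so the step size is fixed in advance. The sketch below illustrates that idea on graph total variation with an assumed toy graph and signal; it is not the thesis' algorithm:

```python
import numpy as np

# Toy instance of min_x 0.5*||x - y||^2 + lam*||D x||_1 (graph total
# variation), solved via projected (proximal) gradient on the dual. The
# cycle graph, signal y, and lam below are illustrative assumptions.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n, m = 4, len(edges)
D = np.zeros((m, n))                        # edge-incidence matrix
for e, (i, j) in enumerate(edges):
    D[e, i], D[e, j] = 1.0, -1.0

y = np.array([0.0, 0.1, 1.0, 0.9])          # noisy piecewise-constant signal
lam = 0.2
step = 1.0 / np.linalg.norm(D, 2) ** 2      # fixed step, no tuning needed

u = np.zeros(m)                             # dual variable (one per edge)
for _ in range(500):
    grad = D @ (D.T @ u - y)                # gradient of the dual objective
    u = np.clip(u - step * grad, -lam, lam) # prox = projection onto the box

x = y - D.T @ u                             # primal recovery from the dual
```

The projection onto the box is exact and cheap, and the fixed step 1/||D||^2 guarantees convergence, which is the appeal of attacking the dual.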
Additionally, we address the combinatorial challenge of learning Directed Acyclic Graph (DAG) structures from observational data based on a linear Structural Equation Model (SEM). Current methods often rely on Lasso-type score functions, which require penalty retuning when noise variances shift and implicitly rely on limiting homoscedasticity assumptions. To overcome these limitations, we introduce CoLiDE (Concomitant Linear DAG Estimation), a convex score function for sparsity-aware DAG learning that decouples sparsity from noise levels. CoLiDE integrates nonconvex acyclicity penalties, enhancing stability and outperforming existing methods without incurring added complexity, especially when the DAGs are larger and the noise level profile is heterogeneous.
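The flavor of such a concomitant score can be sketched as follows: the candidate adjacency W and a noise-scale estimate sigma are scored jointly, and minimizing over sigma in closed form yields a square-root-lasso-like criterion in which the sparsity weight lam no longer depends on the noise level. The exact constants below are illustrative assumptions, not the published CoLiDE objective:

```python
import numpy as np

# Hedged sketch of a concomitant (CoLiDE-style) score: jointly score a
# candidate weighted adjacency W and a noise-scale estimate sigma, so the
# sparsity penalty lam need not be retuned as the noise level changes.
def concomitant_score(X, W, sigma, lam):
    n, d = X.shape
    resid = X - X @ W                          # linear-SEM residuals
    data_fit = np.sum(resid ** 2) / (2.0 * sigma * n)
    return data_fit + d * sigma / 2.0 + lam * np.abs(W).sum()

# The minimizer over sigma is available in closed form; substituting it back
# gives a square-root-lasso-like score, which is what decouples lam from the
# (unknown) noise variance.
def sigma_star(X, W):
    n, d = X.shape
    return np.sqrt(np.sum((X - X @ W) ** 2) / (n * d))
```

In a full DAG learner this score would be combined with a (possibly nonconvex) acyclicity penalty on W and minimized over both arguments; the snippet only conveys the decoupling mechanism.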