A detailed research poster titled "Batch Normalization Alleviates the Spectral Bias in Coordinate Networks," from the School of Electronic Science and Engineering at Nanjing University, is displayed. The poster is authored by Zhicheng Cai, Hao Zhu, Qiu Shen, Xinran Wang, and Xun Cao. The introduction explains the spectral bias in coordinate networks and presents the paper's contribution: using batch normalization (BN) to address it. Theoretical analysis, figures, and experimental results support the findings, and various graphs and image reconstructions demonstrate that BN improves both the convergence of high-frequency components and overall representational performance. QR codes and contact information for the corresponding author are provided. The background shows an indoor exhibition setting with other attendees and posters visible.

Text transcribed from the image:

南京大学 NANJING UNIVERSITY

Batch Normalization Alleviates the Spectral Bias in Coordinate Networks
Zhicheng Cai, Hao Zhu*, Qiu Shen, Xinran Wang, Xun Cao
School of Electronic Science and Engineering, Nanjing University
Corresponding author: zhuhao_photo@nju.edu.cn

Introduction:
➤ Spectral Bias of Coordinate Networks
Coordinate networks suffer from the spectral bias, which makes the model prefer low-frequency components while learning high-frequency components slowly, resulting in limited performance of signal representation.
➤ Contribution:
1. Propose to use batch normalization (BN) to alleviate the spectral bias.
2. Theoretically and empirically prove that the pathological distribution of NTK eigenvalues can be significantly improved by using BN (Fig 1), enabling the network to represent high-frequency components effectively (Fig 2).
3. State-of-the-art performance in various tasks, such as 2D image fitting, 3D shape representation, and 5D neural radiance field optimization.

[Figure panels: histograms of NTK eigenvalues on log-scaled axes, comparing MLP vs. BN+MLP and PEMLP-1/PEMLP-5 vs. BN-PEMLP-1, without and with BN]
Fig 1. Histograms of NTK Eigenvalues

Theoretical Analysis:
➤ The NTK $K = (\nabla_\theta f(X;\theta))^T \nabla_\theta f(X;\theta)$ simulates the training dynamics of the network $f(X;\theta)$. The network output after $n$ training iterations can be approximated as $Y^{(n)} \approx (I - e^{-Kn})Y$. The training error of different frequencies is determined by the eigenvalues $\Lambda$ of the NTK: $K = Q \Lambda Q^T$, $e^{-Kn} = Q e^{-\Lambda n} Q^T$, so $Q^T (Y^{(n)} - Y) = -e^{-\Lambda n} Q^T Y$.
Insight I: Larger NTK eigenvalues make the corresponding frequency components converge faster.
➤ Calculate the explicit numerical values of the NTK matrix with Mean Field Theory:
$K = (L-1)N \begin{pmatrix} K_1 & K_2 & \cdots & K_2 \\ K_2 & K_1 & \cdots & K_2 \\ \vdots & \vdots & \ddots & \vdots \\ K_2 & K_2 & \cdots & K_1 \end{pmatrix} + O(\sqrt{N})$
Statistical characteristics of the NTK eigenvalue distribution: $m_\lambda \sim O(N)$, $v_\lambda \sim O(N^3)$, $\lambda_{max} \sim O(N^2)$.
Insight II: The eigenvalues of the NTK exhibit a pathological distribution: the maximum eigenvalue is extremely large while most eigenvalues are close to zero, leading to the spectral bias.
➤ Add Batch Normalization to the neural network:
$f_{BN}(X;\theta) = \gamma \frac{H_L - \mu}{\sigma} + \beta$, with $\mu = E[H_L]$, $\sigma = \sqrt{E[(H_L)^2] - (E[H_L])^2}$, and $K_{BN} = (\nabla_\theta f_{BN}(X;\theta))^T \nabla_\theta f_{BN}(X;\theta)$.
Calculate the numerical values of the normalized NTK matrix with Mean Field Theory; to leading order $K_{BN}$ again takes the form $(L-1)N$ times a structured matrix, up to an $O(\sqrt{N})$ correction.
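The poster provides no code, but the NTK definition transcribed above is straightforward to check empirically. The following PyTorch sketch estimates $K$ from per-sample parameter gradients for a small coordinate MLP with and without BN; the make_mlp and empirical_ntk helpers, layer sizes, and sample counts are illustrative assumptions on my part, not the authors' setup.

    import torch
    import torch.nn as nn

    def make_mlp(width=64, depth=4, use_bn=False):
        # Hypothetical toy coordinate network: Linear -> (BN) -> ReLU blocks
        layers, d_in = [], 1
        for _ in range(depth - 1):
            layers.append(nn.Linear(d_in, width))
            if use_bn:
                layers.append(nn.BatchNorm1d(width))
            layers.append(nn.ReLU())
            d_in = width
        layers.append(nn.Linear(d_in, 1))
        return nn.Sequential(*layers)

    def empirical_ntk(model, x):
        # K[i, j] = <grad_theta f(x_i), grad_theta f(x_j)>, built row by row
        params = [p for p in model.parameters() if p.requires_grad]
        y = model(x).squeeze(-1)  # one full-batch forward so BN sees batch statistics
        rows = []
        for i in range(y.shape[0]):
            g = torch.autograd.grad(y[i], params, retain_graph=True)
            rows.append(torch.cat([gi.reshape(-1) for gi in g]))
        J = torch.stack(rows)  # (n_points, n_params) Jacobian
        return J @ J.T         # (n_points, n_points) kernel

    x = torch.linspace(-1, 1, 32).unsqueeze(1)  # 1D input coordinates
    for use_bn in (False, True):
        torch.manual_seed(0)
        eig = torch.linalg.eigvalsh(empirical_ntk(make_mlp(use_bn=use_bn), x))
        print(f"BN={use_bn}: max eigenvalue {eig[-1].item():.3e}, "
              f"max/mean ratio {(eig[-1] / eig.mean()).item():.1f}")

If the poster's claim holds, the max/mean ratio should shrink markedly in the BN case, mirroring the flattened histograms of Fig 1; the scaling statistics reported on the poster follow below.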
Statistical characteristics of the BN-NTK eigenvalue distribution: $m_\lambda \sim O(N)$, $v_\lambda \sim O(N)$, $\lambda_{max} \sim O(N)$.

Experimental Results:
➤ Frequency-specific analysis (ReLU, without and with BN)
➤ 2D Image Fitting
➤ 3D Shape Representation
➤ 5D NeRF Optimization
[Result panels: ground truth (GT) and reconstructed images for MLP, PEMLP-1, PEMLP-5, and SIREN, with PSNR labels of 23.25 dB, 27.18 dB, 28.79 dB, and 30.15 dB without BN, and 29.39 dB, 30.05 dB, 30.84 dB, and 31.31 dB with BN]
Fig 2. Reconstructed Images
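To connect Insight I with the eigenvalue statistics above, here is a toy numerical illustration (an editorial sketch, not part of the poster) of the convergence rule $Q^T (Y^{(n)} - Y) = -e^{-\Lambda n} Q^T Y$: the residual in each eigendirection decays as $e^{-\lambda_i \eta n}$. Both spectra and the learning rate eta are invented stand-ins that only mimic the reported scaling laws ($\lambda_{max} \sim O(N^2)$ versus $O(N)$).

    import numpy as np

    # Residual in each NTK eigendirection decays as exp(-lambda_i * eta * n).
    # The two spectra are stand-ins mimicking the poster's scaling laws.
    N, eta = 256, 1e-4
    iters = np.array([1, 10, 100, 1000])
    spectra = {
        "plain": np.r_[float(N) ** 2, np.full(N - 1, 1e-3)],  # one huge eigenvalue, rest near zero
        "bn":    np.full(N, float(N)),                        # flat, O(N) spectrum
    }
    for name, lam in spectra.items():
        residual = np.exp(-np.outer(iters, lam) * eta).mean(axis=1)
        print(name, np.round(residual, 3))
    # "plain" still holds ~99.6% of its residual after 1000 steps, since only the
    # single large-eigenvalue direction has converged; "bn" drives all directions
    # toward zero together.

This is the spectral bias in miniature: a pathological spectrum fits one dominant (low-frequency) component almost instantly while the remaining components barely move, whereas the flatter BN-like spectrum trains all frequencies at comparable rates.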