A conference attendee stands behind a detailed research poster titled "Batch Normalization Alleviates the Spectral Bias in Coordinate Networks," presented by researchers from the School of Electronic Science and Engineering at Nanjing University. The poster, number 92 at CVPR 2024 in Seattle, lays out theoretical analyses and experimental results through graphs, equations, and reconstructed image samples. On the left side, QR codes and contact information for the corresponding author are provided, and throughout, textual explanations are combined with visual data, highlighting how batch normalization improves the performance of coordinate networks by alleviating the spectral bias. The setting is a modern, well-lit conference hall, with the focus clearly on the academic content of the poster.

Text transcribed from the image:

Batch Normalization Alleviates the Spectral Bias in Coordinate Networks
Zhicheng Cai, Hao Zhu*, Qiu Shen, Xinran Wang, Xun Cao
School of Electronic Science and Engineering, Nanjing University
Corresponding author: zhuhao_photo@nju.edu.cn
Poster 92 | CVPR, Seattle, WA, June 17-21, 2024

Overview (left edge cropped in the image): Coordinate networks suffer from the spectral bias, which makes the model learn low-frequency components quickly while learning high-frequency components slowly, leading to limited performance of signal representation. We propose batch normalization (BN) to alleviate the spectral bias, and theoretically and empirically prove that the pathological distribution of NTK eigenvalues can be significantly improved by using BN (Fig 1), so that high-frequency components are fit effectively (Fig 2). BN improves performance in various tasks, such as 2D image fitting, 3D shape representation, and 5D neural radiance field optimization.

[Fig 1. Histograms of NTK Eigenvalues: four log-scale histograms comparing PEMLP-1 vs. BN PEMLP-1 and PEMLP-5 vs. BN PEMLP-5; horizontal axes show eigenvalue magnitude.]

Theoretical Analysis:

➤ The NTK $K = (\nabla_\theta f(X;\theta))^\top \nabla_\theta f(X;\theta)$ simulates the training dynamics of the network $f(X;\theta)$.
➤ The network output after $n$ training iterations can be approximated as $Y^{(n)} \approx (I - e^{-\eta K n})\,Y$.
➤ The training error of different frequencies is determined by the eigenvalues of the NTK: with $K = Q \Lambda Q^\top$, $e^{-\eta K n} = Q\,e^{-\eta \Lambda n}\,Q^\top$, so $Q^\top(Y^{(n)} - Y) \approx -\,e^{-\eta \Lambda n}\,Q^\top Y$.

Conclusion I: Larger NTK eigenvalues make the corresponding frequency components converge faster.

➤ Calculating the explicit numerical values of the NTK matrix with mean-field theory:

$K = (L-1)N \begin{pmatrix} \kappa_1 & \kappa_2 & \cdots & \kappa_2 \\ \kappa_2 & \kappa_1 & \cdots & \kappa_2 \\ \vdots & \vdots & \ddots & \vdots \\ \kappa_2 & \kappa_2 & \cdots & \kappa_1 \end{pmatrix} + O(\sqrt{N})$

where $\kappa_1$ and $\kappa_2$ are defined as sums over the $L$ layers [printed definitions illegible].

➤ Statistical characteristics of the NTK eigenvalue distribution: $m_\lambda \sim O(N)$, $v_\lambda \sim O(N^3)$, $\lambda_{\max} \sim O(N^2)$.

Conclusion II: The eigenvalues of the NTK exhibit a pathological distribution: the maximum eigenvalue is extremely large while most eigenvalues are close to zero, leading to the spectral bias.

➤ Add batch normalization to the neural network:

$f^{BN}(X;\theta) = \gamma\,\frac{H - \bar{H}}{\sigma} + \beta$, where $\bar{H} = E[H]$ and $\sigma = \sqrt{E[H^2] - (E[H])^2}$.

$K^{BN} = (\nabla_\theta f^{BN}(X;\theta))^\top \nabla_\theta f^{BN}(X;\theta)$, which expands into terms of the form $\frac{(\nabla_\theta H)^\top \nabla_\theta H}{\sigma^2}$ and $\frac{H^\top H\,(\nabla_\theta H)^\top \nabla_\theta H}{\sigma^4}$ [full expression partially illegible].

➤ Calculating the numerical values of the normalized NTK matrix with mean-field theory: $K^{BN} = (L-1)N\,(\cdots) + O(\sqrt{N})$, with entries involving factors such as $(\kappa_1 - \kappa_2)$ and $(\kappa_1 - \kappa_3)$ [matrix largely illegible].

➤ Statistical characteristics of the BN-NTK eigenvalue distribution: $m_\lambda \sim O(N)$, $v_\lambda \sim O(N^3)$, $O(N) < \lambda_{\max}$ [remainder cut off].

Experimental Results:

[Figure: Frequency-specific approximation error over training iterations, panels for ReLU, ReLU+BN, PEMLP, PEMLP-BN.]

➤ 2D Image Fitting: GT, SIREN, ReLU, ReLU+BN, PEMLP, PEMLP-BN; PSNR labels include 30.15 dB, 20.57 dB, 28.74 dB, and 24.81 dB.
➤ 3D Shape Representation: GT, SIREN, ReLU, ReLU+BN, PEMLP, PEMLP-BN.
➤ 5D NeRF Optimization: PSNR values include 29.39, 28.93, and 30.36 [remaining figures illegible].

[Fig 2. Reconstructed Images: panels for PEMLP-1 and PEMLP-5, with and without BN; PSNR labels include 27.18 dB, 28.79 dB, 30.84 dB, and 31.31 dB.]
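The step from the eigendecomposition to Conclusion I is compressed on the poster; the following short LaTeX derivation spells it out (standard NTK algebra restated here, not transcribed from the poster; $\eta$ denotes the learning rate and $q_i$ the $i$-th eigenvector of $K$):

% Linearized training dynamics under the NTK approximation:
%   Y^{(n)} \approx (I - e^{-\eta K n}) Y,  with  K = Q \Lambda Q^\top.
\[
  Y^{(n)} - Y \;\approx\; -\,e^{-\eta K n}\,Y
              \;=\; -\,Q\,e^{-\eta \Lambda n}\,Q^{\top} Y ,
\]
% Projecting onto the eigenbasis decouples the error per component:
\[
  Q^{\top}\!\bigl(Y^{(n)} - Y\bigr) \;\approx\; -\,e^{-\eta \Lambda n}\,Q^{\top} Y
  \quad\Longrightarrow\quad
  \bigl|q_i^{\top}(Y^{(n)} - Y)\bigr| \;\approx\; e^{-\eta \lambda_i n}\,\bigl|q_i^{\top} Y\bigr| .
\]
% The i-th error component decays at rate e^{-\eta \lambda_i n}: large
% eigenvalues converge fast, near-zero eigenvalues barely move. Combined
% with the pathological spectrum (\lambda_{max} ~ O(N^2), most \lambda_i
% near zero), this is exactly the spectral bias.

Reading off the per-component decay factor $e^{-\eta \lambda_i n}$ is what justifies Conclusion I and, together with Conclusion II's pathological spectrum, the spectral bias itself.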
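To make Fig 1's comparison concrete, here is a minimal PyTorch sketch of the same kind of experiment. It is an illustrative reconstruction, not the authors' code; the helper names make_mlp and empirical_ntk are hypothetical. It forms the empirical NTK $K = JJ^\top$ from per-sample parameter Jacobians of a small coordinate MLP, with and without BN, and prints the max-to-mean eigenvalue ratio, which the poster's analysis predicts should shrink once BN is inserted:

import torch
import torch.nn as nn

def make_mlp(width=64, depth=4, use_bn=False):
    # Plain coordinate MLP: Linear -> (BN) -> ReLU blocks, scalar output.
    layers, d_in = [], 1
    for _ in range(depth):
        layers.append(nn.Linear(d_in, width))
        if use_bn:
            layers.append(nn.BatchNorm1d(width))
        layers.append(nn.ReLU())
        d_in = width
    layers.append(nn.Linear(d_in, 1))
    return nn.Sequential(*layers)

def empirical_ntk(model, x):
    # K[i, j] = <grad_theta f(x_i), grad_theta f(x_j)>, built row by row.
    # One forward pass over the whole batch, so BN (in train mode) uses
    # batch statistics and they participate in the gradients.
    params = [p for p in model.parameters() if p.requires_grad]
    out = model(x).squeeze(-1)
    rows = []
    for i in range(out.shape[0]):
        grads = torch.autograd.grad(out[i], params, retain_graph=True)
        rows.append(torch.cat([g.flatten() for g in grads]))
    jac = torch.stack(rows)            # (n_points, n_params)
    return jac @ jac.T                 # (n_points, n_points), symmetric PSD

torch.manual_seed(0)
x = torch.linspace(-1, 1, steps=64).unsqueeze(1)   # 1D input coordinates
for use_bn in (False, True):
    model = make_mlp(use_bn=use_bn).train()
    eigs = torch.linalg.eigvalsh(empirical_ntk(model, x))
    print(f"BN={use_bn}: lambda_max/mean = {(eigs.max() / eigs.mean()).item():.1f}")

The model is deliberately kept in train mode so that BatchNorm1d normalizes with batch statistics and the Jacobian differentiates through the normalization itself, matching the structure of the poster's $K^{BN}$ rather than treating BN as a fixed affine map.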