Similar papers 2
May 1, 2016
The modern data analyst must cope with data encoded in various forms: vectors, matrices, strings, graphs, and more. Consequently, statistical and machine learning models tailored to different data encodings are important. We focus on data encoded as normalized vectors, so that their "direction" is more important than their magnitude. Specifically, we consider high-dimensional vectors that lie either on the surface of the unit hypersphere or on the real projective plane. For su...
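(Illustration, not from the paper: the numpy sketch below shows the data setting described above, projecting raw vectors onto the unit hypersphere and identifying antipodal points, which is one common way to coordinatize points of real projective space. All names are illustrative.)

```python
import numpy as np

def to_hypersphere(X):
    """Normalize each row of X to unit length, so only direction matters."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X / norms

def to_projective(X):
    """Represent each direction up to sign (antipodal identification):
    flip each unit vector so its first nonzero coordinate is positive."""
    U = to_hypersphere(X)
    first_nonzero = np.argmax(np.abs(U) > 1e-12, axis=1)
    signs = np.sign(U[np.arange(len(U)), first_nonzero])
    return U * signs[:, None]

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
print(to_hypersphere(X))   # points on the unit sphere
print(to_projective(X))    # the same directions up to sign
```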
November 1, 2022
In this paper, we prove some zero density theorems for certain families of Dirichlet $L$-functions. More specifically, the subjects of our interest are the collections of Dirichlet $L$-functions associated with characters to moduli from certain sparse sets and of certain fixed orders.
September 22, 2021
Generalizing previous work of Iwaniec, Luo, and Sarnak (2000), we use information from one-level density theorems to estimate the proportion of non-vanishing of $L$-functions in a family at a low-lying height on the critical line (measured by the analytic conductor). To solve the Fourier optimization problems that arise, we provide a unified framework based on the theory of reproducing kernel Hilbert spaces of entire functions (there is one such space associated to each symme...
July 12, 2016
This thesis determines some of the implications of non-universal and emergent universal statistics for arithmetic correlations and fluctuations of arithmetic functions; in particular, correlations amongst prime numbers and the variance of the number of primes in short intervals are generalised by associating these concepts with $L$-functions arising from number-theoretic objects.
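(Not code from the thesis: as a concrete handle on one statistic mentioned here, the sketch below counts primes in consecutive short intervals of length $H$ up to $N$ and reports the empirical mean and variance of those counts.)

```python
import numpy as np

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = np.ones(n + 1, dtype=bool)
    sieve[:2] = False
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = False
    return np.flatnonzero(sieve)

def short_interval_counts(N, H):
    """Count primes in consecutive intervals (kH, (k+1)H] up to N."""
    is_prime = np.zeros(N + 1, dtype=bool)
    is_prime[primes_up_to(N)] = True
    return np.array([is_prime[k*H + 1:(k + 1)*H + 1].sum()
                     for k in range(N // H)])

counts = short_interval_counts(10**6, 1000)
# By the prime number theorem the mean count is roughly H / log x;
# the fluctuation of `counts` around it is the variance studied here.
print(counts.mean(), counts.var())
```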
November 9, 1994
As we have shown several years ago [Y2], zeros of $L(s, \Delta)$ and $L^{(2)}(s, \Delta)$ can be calculated quite efficiently by a certain experimental method. Here $\Delta$ denotes the cusp form of weight 12 with respect to $\mathrm{SL}(2, \mathbb{Z})$, and $L(s, \Delta)$ (resp. $L^{(2)}(s, \Delta)$) denotes the standard (resp. symmetric square) $L$-function attached to $\Delta$. The purpose of this paper is to show that this method can be applied to a wide class of $L$-functions so that we ca...
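(The paper's experimental method targets $L(s, \Delta)$ and its symmetric square; purely as an illustration of the same general idea, locating zeros on the critical line through sign changes of a real-valued function, here is a short mpmath sketch for the Riemann zeta function using Hardy's $Z$-function. Parameters are illustrative.)

```python
from mpmath import mp, siegelz, findroot

mp.dps = 25  # working precision in decimal digits

def zeros_on_critical_line(t_start, t_end, step=0.1):
    """Scan Z(t) = exp(i*theta(t)) * zeta(1/2 + it) for sign changes,
    then refine each bracketed zero by bisection."""
    zeros = []
    t, prev = t_start, siegelz(t_start)
    while t < t_end:
        t_next = t + step
        cur = siegelz(t_next)
        if prev * cur < 0:  # sign change => a zero of zeta on the line
            zeros.append(findroot(siegelz, (t, t_next), solver='bisect'))
        t, prev = t_next, cur
    return zeros

print(zeros_on_critical_line(10, 30))  # ~14.134, 21.022, 25.011, ...
```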
October 22, 2000
This paper describes some validated-numerics aspects of the Riemann zeta function, Dirichlet $L$-functions, Dedekind zeta functions, and Hasse-Weil $L$-functions.
August 22, 2023
This paper presents a novel, interdisciplinary study that leverages a Machine Learning (ML) assisted framework to explore the geometry of affine Deligne-Lusztig varieties (ADLV). The primary objective is to investigate the nonemptiness pattern, dimension, and enumeration of irreducible components of ADLV. Our proposed framework demonstrates a recursive pipeline of data generation, model training, pattern analysis, and human examination, presenting an intricate interplay betwee...
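(The abstract describes the pipeline only at a high level. As a toy, runnable stand-in for one round of it, one might train an interpretable model on synthetic data and surface its learned rules for human examination; everything below, the data, the model choice, and the feature names, is invented for illustration and is not the paper's ADLV data or code.)

```python
# One toy round: generate data, train an interpretable model, extract a
# candidate pattern, and hand it to a human to examine and try to prove.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.integers(0, 10, size=(500, 3))      # stand-in combinatorial inputs
y = (X[:, 0] >= 5).astype(int)              # hidden pattern to "discover"

model = DecisionTreeClassifier(max_depth=2).fit(X, y)
# The printed rule (here: a split at a <= 4.5) is the candidate pattern
# that the human-examination step would inspect.
print(export_text(model, feature_names=["a", "b", "c"]))
```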
October 25, 2023
In this paper, we use the Weyl-bound for Dirichlet $L$-functions to derive zero-density estimates for $L$-functions associated to families of fixed-order Dirichlet characters. The results improve on previous bounds given by the author when $\sigma$ is sufficiently distanced from the critical line.
November 8, 2024
This paper demonstrates that grokking behavior in modular arithmetic with modulus $P$ in a neural network can be controlled by modifying the profile of the activation function as well as the depth and width of the model. Plotting the even PCA projections of the weights of the last NN layer against their odd projections further yields patterns that become significantly more uniform when the nonlinearity is increased by incrementing the number of layers. These patterns can be ...
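(A minimal PyTorch-style sketch of this kind of setup, not the paper's code: modular addition mod $P$ learned by a small MLP, followed by PCA of the final-layer weights. The modulus, widths, optimizer settings, and split are illustrative assumptions.)

```python
import torch
import torch.nn as nn

P = 97                                   # modulus (illustrative choice)
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
targets = (pairs[:, 0] + pairs[:, 1]) % P

# one-hot encode the two operands, as is common in grokking setups
X = torch.cat([nn.functional.one_hot(pairs[:, 0], P),
               nn.functional.one_hot(pairs[:, 1], P)], dim=1).float()

model = nn.Sequential(                   # depth/width are the knobs varied above
    nn.Linear(2 * P, 256), nn.ReLU(),
    nn.Linear(256, P),
)
# heavy weight decay, common in grokking experiments
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

train = torch.rand(len(X)) < 0.5         # random train/test split
for step in range(5000):                 # grokking shows up after long training
    opt.zero_grad()
    loss = loss_fn(model(X[train]), targets[train])
    loss.backward()
    opt.step()

# PCA of the last layer's weights; pair even- and odd-indexed components
# as in the plots described above.
W = model[-1].weight.detach()            # shape (P, 256)
W = W - W.mean(dim=0)
U, S, V = torch.pca_lowrank(W, q=4, center=False)
proj = W @ V                             # columns 0,2 vs 1,3: even vs odd projections
print(proj[:5])
```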
February 20, 2025
Deep neural networks have reshaped modern machine learning by learning powerful latent representations that often align with the manifold hypothesis: high-dimensional data lie on lower-dimensional manifolds. In this paper, we establish a connection between manifold learning and computational algebra by demonstrating how vanishing ideals can characterize the latent manifolds of deep networks. To that end, we propose a new neural architecture that (i) truncates a pretrained net...
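(The paper's architecture is more involved, but its core object, the vanishing ideal of a point set, can be approximated in a few lines: build a matrix of monomial evaluations on latent points and take the near-null right singular vectors as coefficients of approximately vanishing polynomials. The sketch below is a generic illustration of that idea, not the paper's method; all names are invented.)

```python
import numpy as np
from itertools import combinations_with_replacement

def monomial_features(Z, degree):
    """Evaluate all monomials up to `degree` at each row of Z (constant included)."""
    n, d = Z.shape
    cols = [np.ones(n)]
    for deg in range(1, degree + 1):
        for idx in combinations_with_replacement(range(d), deg):
            cols.append(np.prod(Z[:, list(idx)], axis=1))
    return np.stack(cols, axis=1)

def approximate_vanishing_polys(Z, degree=2, tol=1e-8):
    """Coefficient vectors c with ||M c|| ~ 0, i.e. polynomials that nearly
    vanish at every point of Z (right singular vectors with tiny singular values)."""
    M = monomial_features(Z, degree)
    _, s, Vt = np.linalg.svd(M, full_matrices=True)
    small = np.concatenate([s, np.zeros(Vt.shape[0] - len(s))]) < tol
    return Vt[small]

# latent points on the unit circle: x^2 + y^2 - 1 should be (re)discovered
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
Z = np.stack([np.cos(theta), np.sin(theta)], axis=1)
# monomial order is [1, x, y, x^2, xy, y^2]; the recovered row is
# proportional to [-1, 0, 0, 1, 0, 1]
print(approximate_vanishing_polys(Z).round(3))
```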