neural_tangents.empirical_nngp_fn

neural_tangents.empirical_nngp_fn(f, trace_axes=(-1,), diagonal_axes=())

Returns a function to draw a single sample of the NNGP of a given network f.

The Neural Network Gaussian Process (NNGP) kernel is defined as \(f(X_1) f(X_2)^T\), i.e. the outer product of the function outputs.
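For illustration, a minimal sketch of drawing one such sample with a small fully-connected network built from neural_tangents.stax (the architecture, widths, and input shapes below are arbitrary choices for demonstration, not part of the API):

    import jax
    import neural_tangents as nt
    from neural_tangents import stax

    # A small illustrative network; any f(params, x) -> PyTree works.
    init_fn, apply_fn, _ = stax.serial(
        stax.Dense(512), stax.Relu(), stax.Dense(10))

    key = jax.random.PRNGKey(0)
    x1 = jax.random.normal(key, (4, 8))  # 4 inputs of dimension 8
    x2 = jax.random.normal(key, (6, 8))  # 6 inputs of dimension 8
    _, params = init_fn(key, x1.shape)

    # One sample of the NNGP, evaluated at this random draw of params.
    nngp_fn = nt.empirical_nngp_fn(apply_fn)
    kernel = nngp_fn(x1, x2, params)
    print(kernel.shape)  # (4, 6): the logit axis is traced over by default.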

Warning

Resulting kernel shape is nearly zip(f(x1).shape, f(x2).shape), subject to the trace_axes and diagonal_axes parameters, which make certain assumptions about the outputs f(x) that may only be true in the infinite-width / infinite-number-of-samples limit, or may not apply to your architecture. For the most precise results in the context of linearized training dynamics of a specific finite-width network, set both trace_axes=() and diagonal_axes=() to obtain the kernel of the exact shape zip(f(x1).shape, f(x2).shape).
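Continuing the sketch above, with outputs f(params, x) of shape (batch, 10), the exact kernel would be obtained as follows:

    # Full covariance, no structural assumptions.
    nngp_full_fn = nt.empirical_nngp_fn(apply_fn, trace_axes=(), diagonal_axes=())
    kernel_full = nngp_full_fn(x1, x2, params)
    # f(x1).shape == (4, 10) and f(x2).shape == (6, 10), so the kernel
    # has shape zip((4, 10), (6, 10)) -> (4, 6, 10, 10).
    print(kernel_full.shape)  # (4, 6, 10, 10)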

For networks with multiple outputs (i.e. lists, tuples, PyTrees), the empirical kernels will in principle contain terms measuring the covariance between the different outputs. Here, we ignore these cross-terms and consider each output separately. Please raise an issue if this feature is important to you.

Parameters:
  • f (ApplyFn) – the function whose NNGP we are computing. It should have the signature f(params, x, **kwargs), where params, x, and the returned output are all PyTrees.

  • trace_axes (Union[int, Sequence[int]]) – output axes to trace the output kernel over, i.e. compute only the trace of the covariance along the respective pair of axes (one pair for each axis in trace_axes). This allows you to save space and compute if you are only interested in the respective trace, and can also improve approximation accuracy if you know that the covariance along these pairs of axes converges to a constant * identity matrix in the limit of interest (e.g. infinite width or infinite n_samples). A common use case is the channel / feature / logit axis, since activation slices along such an axis are i.i.d., and the covariance along the respective pair of axes indeed converges to a constant-diagonal matrix in the infinite width or infinite n_samples limit (see the sketch after this list). Also related to “contracting dimensions” in XLA terms (https://www.tensorflow.org/xla/operation_semantics#dotgeneral).

  • diagonal_axes (Union[int, Sequence[int]]) – output axes to diagonalize the output kernel over, i.e. compute only the diagonal of the covariance along the respective pair of axes (one pair for each axis in diagonal_axes). This allows you to save space and compute if off-diagonal values along these axes are not needed, and can also improve approximation accuracy if their limiting value is known theoretically, e.g. if they vanish in the limit of interest (e.g. infinite width or infinite n_samples). If you further know that the on-diagonal values converge to the same constant in your limit of interest, you should specify these axes in trace_axes instead, to save even more compute and gain even more accuracy. A common use case is computing the variance (instead of covariance) along certain axes; see the sketch after this list. Also related to “batch dimensions” in XLA terms (https://www.tensorflow.org/xla/operation_semantics#dotgeneral).
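As a concrete illustration of both parameters, again assuming the hypothetical network from the sketch above with outputs of shape (batch, 10):

    # trace_axes=(-1,) (the default): trace over the logit axis.
    print(nt.empirical_nngp_fn(apply_fn)(x1, x2, params).shape)   # (4, 6)

    # diagonal_axes=(-1,): keep per-logit variances only, dropping
    # cross-logit covariances.
    diag_fn = nt.empirical_nngp_fn(apply_fn, trace_axes=(), diagonal_axes=(-1,))
    print(diag_fn(x1, x2, params).shape)                          # (4, 6, 10)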

Return type:

EmpiricalKernelFn

Returns:

A function to draw a single sample of the NNGP of a given network f.