Architectures

Regular Architectures

MultiLayerPerceptron(dim_in, dim_hidden, n_layers, activ=nn.ReLU, final_activ=nn.Identity)

Simple Multi-layer Perceptron.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| dim_in | int | dimension of the input vector. Usually 2 or 3 for neural implicits. | required |
| dim_hidden | int | dimension of the hidden layers. | required |
| n_layers | int | number of hidden layers. | required |
| activ | Module | activation function of the network. Defaults to torch.nn.ReLU. | ReLU |
| final_activ | Module | final activation of the network, applied to the output before it is returned. Defaults to torch.nn.Identity. | Identity |

References
  • On the Effectiveness of Weight-Encoded Neural Implicit 3D Shapes, Davies et al., 2021
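
A minimal usage sketch (the import path is hypothetical, since the package layout is not shown here, and the scalar output is an assumption, as the constructor exposes no output dimension):

```python
import torch
# hypothetical import; adjust to the actual package layout
# from <package>.architectures import MultiLayerPerceptron

net = MultiLayerPerceptron(dim_in=3, dim_hidden=128, n_layers=4)
pts = torch.rand(1024, 3) * 2 - 1  # query points in [-1, 1]^3
values = net(pts)                  # one implicit value per query point (assumed)
```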

SirenNet(dim_in, dim_hidden, n_layers, w0=6.0, w0_first_layer=30.0)

A SIREN network: a multi-layer perceptron with sinusoidal activations. Code adapted from https://github.com/MClemot/SkeletonLearning/blob/main/siren_nn.py

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| dim_in | int | dimension of the input vector. Usually 2 or 3 for neural implicits. | required |
| dim_hidden | int | dimension of the hidden layers. | required |
| n_layers | int | number of hidden layers. | required |
| w0 | float | frequency of the sine activation in the hidden layers. Defaults to 6.0. | 6.0 |
| w0_first_layer | float | frequency of the sine activation in the first layer. Defaults to 30.0. | 30.0 |

References
  • Implicit Neural Representations with Periodic Activation Functions, Sitzmann et al., 2020
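
To illustrate the role of w0 and w0_first_layer, here is a sketch of a single SIREN layer with the initialization scheme of Sitzmann et al., 2020; this is illustrative and not the code from the repository linked above:

```python
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """One SIREN layer: x -> sin(w0 * (W x + b))."""

    def __init__(self, dim_in, dim_out, w0=6.0, is_first=False):
        super().__init__()
        self.w0 = w0
        self.linear = nn.Linear(dim_in, dim_out)
        # uniform init from the SIREN paper: 1/dim_in for the first layer,
        # sqrt(6/dim_in)/w0 afterwards, to keep pre-activations well scaled
        bound = 1.0 / dim_in if is_first else math.sqrt(6.0 / dim_in) / w0
        nn.init.uniform_(self.linear.weight, -bound, bound)

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))
```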

PhaseNet(dim_in, dim_hidden, n_layers, FF=True, skip_in=(), geometric_init=True, radius_init=1, beta=100)

Bases: Module

Code adapted from the original PHASE implementation

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| dim_in | int | dimension of the input vector. Usually 2 or 3 for neural implicits. | required |
| dim_hidden | int | dimension of the hidden layers. | required |
| n_layers | int | number of hidden layers. | required |
| FF | bool | whether to apply a Fourier-features encoding to the input. Defaults to True. | True |
| skip_in | tuple | indices of the hidden layers that receive a skip connection from the input. Defaults to (). | () |
| geometric_init | bool | whether to use the geometric initialization, which biases the network towards the signed distance function of a sphere. Defaults to True. | True |
| radius_init | int | radius of the sphere targeted by the geometric initialization. Defaults to 1. | 1 |
| beta | int | beta parameter of the Softplus activation. Defaults to 100. | 100 |
References
  • Phase Transitions, Distance Functions, and Implicit Neural Representations, Lipman, 2021
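
A minimal usage sketch following the documented signature (the skip connection index and the scalar output are assumptions for illustration):

```python
import torch

net = PhaseNet(dim_in=3, dim_hidden=256, n_layers=8, skip_in=(4,))
x = torch.rand(2048, 3) * 2 - 1  # query points in [-1, 1]^3
u = net(x)                       # phase-field value per query point (assumed)
```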

Lipschitz Architectures

CPLLipschitzDenseLayer(in_features, inner_dim=-1, activation=nn.ReLU(), power_it_max_iter=10)

Bases: Module

l2norm_power_iteration(max_iter)

Estimate the largest singular value of the weight matrix with a small number of power iterations, for use during training.
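
The idea behind l2norm_power_iteration can be sketched as follows (an illustrative re-implementation, not the layer's actual code):

```python
import torch

def spectral_norm_sketch(W, max_iter=10):
    # power iteration on W^T W converges to the leading right singular vector
    v = torch.randn(W.shape[1])
    for _ in range(max_iter):
        v = W.T @ (W @ v)
        v = v / v.norm()
    return (W @ v).norm()  # ||W v|| approximates the largest singular value
```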

DenseLipAOL(dim_in, dim_hidden, n_layers, coeff_lip=1.0, activation=nn.ReLU())

Neural network made of Almost-Orthogonal Layers (AOL), as proposed by [1]. Using a square matrix \(W \in \mathbb{R}^{k \times k}\) and a bias vector \(b \in \mathbb{R}^k\), each layer is defined as:

\[x \mapsto \sigma(WT^{-1/2}x + b)\]

where \(T\) is a diagonal matrix of size \(\mathbb{R}^{k \times k}\):

\[ T_{ii} = \sum_{j=1}^k \left| (W^T W)_{ij}\right|\]

and \(\sigma\) is any 1-Lipschitz activation function, for instance the ReLU \(\sigma(x) = \max(0, x)\).

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| dim_in | int | dimension of the input vector. Usually 2 or 3 for neural implicits. | required |
| dim_hidden | int | dimension of the hidden layers. | required |
| n_layers | int | number of hidden layers. | required |
| coeff_lip | float | Lipschitz constant. Defaults to 1.0. | 1.0 |
| activation | Module | activation function to consider. Defaults to nn.ReLU(). | ReLU() |

References
  [1] Almost-Orthogonal Layers for Efficient General-Purpose Lipschitz Networks, Prach and Lampert, 2022
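
The rescaling above fits in a few lines of PyTorch; the following is a sketch of one layer's forward pass under these equations, not the library's implementation:

```python
import torch
import torch.nn.functional as F

def aol_layer(W, b, x, eps=1e-12):
    # T_ii = sum_j |(W^T W)_{ij}|: row-wise absolute sums of the Gram matrix
    t = (W.T @ W).abs().sum(dim=1)
    W_scaled = W * (t + eps).rsqrt()   # right-multiply W by T^{-1/2}
    return F.relu(x @ W_scaled.T + b)  # sigma(W T^{-1/2} x + b), batched
```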

DenseLipBjorck(dim_in, dim_hidden, n_layers, group_sort_size=2, bias=True, k_coeff_lip=1.0)

A Lipschitz neural architecture based on Björck's orthonormalization. The implementation is based on the SpectralLinear layer of the deel-torchlip library.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| dim_in | int | dimension of the input vector. Usually 2 or 3 for neural implicits. | required |
| dim_hidden | int | dimension of the hidden layers. | required |
| n_layers | int | number of hidden layers. | required |
| group_sort_size | int | size of the GroupSort activation function. If set to zero, the activation is a FullSort. Defaults to 2 (the minimum). | 2 |
| bias | bool | whether to include bias vectors in the layers. Defaults to True. | True |
| k_coeff_lip | float | Lipschitz constant of the network. Defaults to 1.0. | 1.0 |

References
  • Sorting Out Lipschitz Function Approximation, Anil et al., 2019
  • https://github.com/deel-ai/deel-torchlip
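
For intuition, the first-order Björck iteration pushes a weight matrix towards the nearest orthonormal one. A sketch follows; deel-torchlip additionally pre-scales W by its spectral norm so that the iteration converges:

```python
import torch

def bjorck_orthonormalize(W, iters=15, beta=0.5):
    # first-order Bjorck iteration: W <- W (I + beta (I - W^T W))
    eye = torch.eye(W.shape[1], device=W.device)
    for _ in range(iters):
        W = W @ (eye + beta * (eye - W.T @ W))
    return W
```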

DenseLipCPL(dim_in, dim_hidden, n_layers, activation=nn.ReLU(), with_group_sort=True)

Neural network made of Convex Potential layers, as proposed by [1]. Using a square matrix \(W \in \mathbb{R}^{k \times k}\) and a bias vector \(b \in \mathbb{R}^k\), each layer is defined as:

\[x \mapsto x - \frac{2}{\|W\|_2^2} W^T \sigma(Wx + b)\]

where \(\sigma(x) = \max(0, x)\) is the ReLU function.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| dim_in | int | dimension of the input vector. Usually 2 or 3 for neural implicits. | required |
| dim_hidden | int | dimension of the hidden layers. | required |
| n_layers | int | number of hidden layers. | required |
| activation | Module | activation function to consider. Defaults to nn.ReLU(). | ReLU() |
| with_group_sort | bool | whether to add a GroupSort2 activation between layers to slightly increase accuracy. Defaults to True. | True |

References
  [1] A Dynamical System Perspective for Lipschitz Neural Networks, Meunier et al., 2022
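
A sketch of the layer equation in PyTorch (illustrative only; in practice the spectral norm is estimated by power iteration rather than computed exactly):

```python
import torch
import torch.nn.functional as F

def cpl_layer(W, b, x, eps=1e-12):
    # x - (2 / ||W||_2^2) W^T relu(W x + b), batched over the rows of x
    sigma = torch.linalg.matrix_norm(W, ord=2)  # largest singular value
    return x - (2.0 / (sigma ** 2 + eps)) * F.relu(x @ W.T + b) @ W
```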

DenseLipSDP(dim_in, dim_hidden, n_layers, coeff_lip=1.0, activation=nn.ReLU(), with_group_sort=True)

Neural network made of Semi-Definite Programming neural layers, as proposed by [1]. Using a square matrix \(W \in \mathbb{R}^{k \times k}\), a bias vector \(b \in \mathbb{R}^k\) and an additional vector \(q \in \mathbb{R}^k\) as parameters, each layer is defined as:

\[x \mapsto x - 2WT^{-1} \sigma(W^Tx + b)\]

where \(T\) is a diagonal matrix of size \(\mathbb{R}^{k \times k}\):

\[ T_{ii} = \sum_{j=1}^k \left| (W^T W)_{ij} \,\exp(q_j - q_i) \right|\]

and \(\sigma(x) = \max(0, x)\) is the rectified linear unit (ReLU) function.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| dim_in | int | dimension of the input vector. Usually 2 or 3 for neural implicits. | required |
| dim_hidden | int | dimension of the hidden layers. | required |
| n_layers | int | number of hidden layers. | required |
| coeff_lip | float | Lipschitz constant. Defaults to 1.0. | 1.0 |
| activation | Module | activation function. This architecture is proven to be 1-Lipschitz for ReLU, sigmoid and tanh. Defaults to nn.ReLU(). | ReLU() |
| with_group_sort | bool | whether to add a GroupSort2 activation between layers to slightly increase accuracy. Defaults to True. | True |

References
  [1] A Unified Algebraic Perspective on Lipschitz Neural Networks, Araujo et al., 2023
  [2] https://github.com/deel-ai/orthogonium/blob/main/orthogonium/layers/conv/SLL/sll_layer.py
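
A sketch of the layer equation in PyTorch (illustrative, not the library's implementation):

```python
import torch
import torch.nn.functional as F

def sdp_layer(W, b, q, x):
    # T_ii = sum_j |(W^T W)_{ij}| * exp(q_j - q_i)
    gram = (W.T @ W).abs()
    t = (gram * torch.exp(q[None, :] - q[:, None])).sum(dim=1)
    # x - 2 W T^{-1} relu(W^T x + b), batched over the rows of x
    return x - 2.0 * (F.relu(x @ W + b) / t) @ W.T
```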

Utilities

count_parameters(model, with_grad=True)

Counts the number of optimizable parameters of a PyTorch neural network.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| model | Module | the neural model to consider. | required |
| with_grad | bool | if True, count only optimizable parameters (those with requires_grad=True); if False, count every parameter in the model's tensors. Defaults to True. | True |


Returns:

| Type | Description |
| --- | --- |
| int | number of optimizable parameters in the model |
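
Equivalent logic as a short sketch of the documented behavior:

```python
import torch.nn as nn

def count_parameters_sketch(model: nn.Module, with_grad: bool = True) -> int:
    # with_grad=True: only trainable parameters; False: every parameter tensor
    return sum(p.numel() for p in model.parameters()
               if p.requires_grad or not with_grad)
```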