source
FC
FC (input_size:int, output_size:int, hidden_sizes:List[int],
    activation:torch.nn.modules.module.Module=LeakyReLU(negative_slope=0.2, inplace=True),
    layer_norm:bool=False)
A fully connected network with an activation function and optional layer normalization
| | **Type** | **Default** | **Details** |
|---|---|---|---|
| input_size | int | | The size of the input |
| output_size | int | | The size of the output |
| hidden_sizes | typing.List[int] | | The sizes of the hidden layers |
| activation | Module | LeakyReLU(negative_slope=0.2, inplace=True) | |
| layer_norm | bool | False | Whether to use layer normalization |
source
FCBlock
FCBlock (input_size:int, output_size:int,
         activation:torch.nn.modules.module.Module=LeakyReLU(negative_slope=0.2, inplace=True),
         layer_norm:bool=False)
A fully connected block with an activation function and optional layer normalization
| | **Type** | **Default** | **Details** |
|---|---|---|---|
| input_size | int | | The size of the input |
| output_size | int | | The size of the output |
| activation | Module | LeakyReLU(negative_slope=0.2, inplace=True) | |
| layer_norm | bool | False | Whether to use layer normalization |
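With `layer_norm=False`, an `FCBlock` plausibly reduces to a linear map followed by the LeakyReLU activation (which matches the `Linear` → `Identity` → `LeakyReLU` triples in the summary below). A minimal pure-Python sketch of that computation, with made-up weights and helper names for illustration only:

```python
def leaky_relu(x, negative_slope=0.2):
    # LeakyReLU: pass positives through, scale negatives by the slope
    return x if x > 0 else negative_slope * x

def fc_block(x, weight, bias, negative_slope=0.2):
    # Linear: y_j = sum_i W[j][i] * x[i] + b[j], then the activation.
    # (layer_norm=False corresponds to the Identity layers in the summary.)
    out = []
    for w_row, b in zip(weight, bias):
        y = sum(w * xi for w, xi in zip(w_row, x)) + b
        out.append(leaky_relu(y, negative_slope))
    return out

# Tiny example: 2 inputs -> 3 outputs
x = [1.0, -2.0]
W = [[0.5, 0.1], [-0.3, 0.2], [1.0, 1.0]]
b = [0.0, 0.1, -0.5]
y = fc_block(x, W, b)
assert len(y) == 3
```

This is only a sketch of the per-block math; the actual module operates on batched tensors.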
Test
model = FC(
    input_size=7,
    output_size=100,
    hidden_sizes=[100, 100],
)
x = torch.randn(1, 7)
y = model(x)
assert y.shape == torch.Size([1, 100])
summary(model, input_size=(1, 7))
==========================================================================================
Layer (type:depth-idx) Output Shape Param #
==========================================================================================
FC [1, 100] --
├─Sequential: 1-1 [1, 100] --
│ └─FCBlock: 2-1 [1, 100] --
│ │ └─Linear: 3-1 [1, 100] 800
│ │ └─Identity: 3-2 [1, 100] --
│ └─FCBlock: 2-4 -- (recursive)
│ │ └─LeakyReLU: 3-3 [1, 100] --
│ └─FCBlock: 2-3 [1, 100] --
│ │ └─Linear: 3-4 [1, 100] 10,100
│ │ └─Identity: 3-5 [1, 100] --
│ └─FCBlock: 2-4 -- (recursive)
│ │ └─LeakyReLU: 3-6 [1, 100] --
│ └─FCBlock: 2-5 [1, 100] --
│ │ └─Linear: 3-7 [1, 100] 10,100
│ │ └─Identity: 3-8 [1, 100] --
│ │ └─LeakyReLU: 3-9 [1, 100] --
│ └─Linear: 2-6 [1, 100] 10,100
==========================================================================================
Total params: 31,100
Trainable params: 31,100
Non-trainable params: 0
Total mult-adds (M): 0.03
==========================================================================================
Input size (MB): 0.00
Forward/backward pass size (MB): 0.00
Params size (MB): 0.12
Estimated Total Size (MB): 0.13
==========================================================================================
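The parameter total can be checked by hand: a `Linear(in, out)` holds `in * out` weights plus `out` biases, and the summary lists one 7 → 100 layer followed by three 100 → 100 layers:

```python
def linear_params(n_in, n_out):
    # weights (n_in * n_out) plus one bias per output unit
    return n_in * n_out + n_out

# Layers as listed in the torchinfo summary above:
# Linear 3-1 (7 -> 100); Linear 3-4, 3-7, 2-6 (all 100 -> 100)
total = linear_params(7, 100) + 3 * linear_params(100, 100)
assert linear_params(7, 100) == 800  # matches "Param # 800"
assert total == 31_100               # matches "Total params: 31,100"
```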
source
CoefficientsFC
CoefficientsFC (n_parallel:int=32, n_biquads:int=2,
initial_gain_scale:float=1.0, tahn_c:float=1.0,
tanh_d:float=1.0, use_zp_init:bool=False,
initial_pole_mag:float=1.5, initial_zero_mag:float=0,
**kwargs)
A wrapper around the FC class that outputs the coefficients of a filterbank
Test
n_parallel = 32
n_biquads = 2
n_coeffs = 6
n_batches = 2
coefficients_fc = CoefficientsFC(
    input_size=1007,
    hidden_sizes=[100, 100],
    n_parallel=n_parallel,
    n_biquads=n_biquads,
    initial_gain_scale=1.0,
)
x = torch.randn(n_batches, 1007)
y = coefficients_fc(x)
assert y.shape == torch.Size([n_batches, n_parallel, n_biquads, n_coeffs])
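The asserted shape suggests that `CoefficientsFC` reshapes a flat FC output of `n_parallel * n_biquads * n_coeffs` values into `(n_parallel, n_biquads, n_coeffs)` per batch item, where the 6 values per biquad are presumably the usual numerator/denominator coefficients. A minimal pure-Python sketch of that reshape; the internals here are an assumption for illustration, not the actual implementation:

```python
def reshape_coefficients(flat, n_parallel, n_biquads, n_coeffs=6):
    # Split one batch item's flat output into [n_parallel][n_biquads][n_coeffs]
    assert len(flat) == n_parallel * n_biquads * n_coeffs
    out = []
    for p in range(n_parallel):
        filt = []
        for b in range(n_biquads):
            start = (p * n_biquads + b) * n_coeffs
            filt.append(flat[start:start + n_coeffs])
        out.append(filt)
    return out

flat = list(range(32 * 2 * 6))  # stand-in for one batch item's flat output
coeffs = reshape_coefficients(flat, n_parallel=32, n_biquads=2)
assert len(coeffs) == 32 and len(coeffs[0]) == 2 and len(coeffs[0][0]) == 6
```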