Data

Provides torch Dataset wrappers and utilities for modal synthesis.

Datasets


source

SingleShapeDataset

 SingleShapeDataset (material:neuralresonator.modal.Material=
                         Material(rho=2700, E=72000000000.0, nu=0.19,
                         alpha=6, beta=1e-07),
                     n_modes:int=32, audio_length_in_seconds:float=0.3,
                     sample_rate:float=32000, n_refinements:int=3,
                     mesh:Optional[skfem.mesh.mesh_2d.Mesh2D]=None)

A synthetic dataset of materials and audio generated from their modal vibrations.

Type Default Details
material Material Material(rho=2700, E=72000000000.0, nu=0.19, alpha=6, beta=1e-07) Material to use
n_modes int 32 Number of modes to use
audio_length_in_seconds float 0.3 Length of audio to render
sample_rate float 32000 Sample rate of audio to render
n_refinements int 3 Number of refinement steps for the mesh
mesh typing.Optional[skfem.mesh.mesh_2d.Mesh2D] None Optional precomputed mesh to use

Dataset generator


source

generate_dataset

 generate_dataset (materials:List[neuralresonator.modal.Material],
                   shapes:List[numpy.ndarray], n_modes:int=32,
                   n_refinements:int=3, scale_factor:float=1,
                   resolution:tuple[int,int]=(64, 64),
                   without_boundary_nodes:bool=True,
                   data_dir:pathlib.Path=Path('data'))

Generates a dataset of shapes and materials.

Type Default Details
materials typing.List[neuralresonator.modal.Material] List of materials
shapes typing.List[numpy.ndarray] List of shapes
n_modes int 32 Number of modes
n_refinements int 3 Number of refinements
scale_factor float 1 Scale factor for the mesh
resolution tuple (64, 64) Resolution of the occupancy map
without_boundary_nodes bool True Whether to remove the boundary nodes
data_dir Path data Directory to save the data
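The exact format of `shapes` is not spelled out on this page; judging from the `n_vertices` argument of `generate_random_dataset` below, one plausible reading (an assumption, not confirmed here) is that each shape is an `(n_vertices, 2)` array of 2-D polygon vertex coordinates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shape format: each entry is an (n_vertices, 2) array
# of 2-D vertex coordinates in [0, 1).
n_vertices = 13
shapes = [rng.random((n_vertices, 2)) for _ in range(10)]

print(len(shapes), shapes[0].shape)  # 10 (13, 2)
```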

source

generate_random_dataset

 generate_random_dataset (n_shapes:int, n_materials:int, n_vertices:int,
                          **kwargs)

Generates a dataset from randomly sampled shapes and materials.

Type Details
n_shapes int Number of shapes to generate
n_materials int Number of materials to generate
n_vertices int Number of vertices for the shape
kwargs Additional keyword arguments forwarded to generate_dataset

Generate a random dataset

from pathlib import Path

n_shapes = 10
n_materials = 1
n_vertices = 13
scale_factor = 1
n_refinements = 3

data_dir = Path("data")
data_dir.mkdir(exist_ok=True)

generate_random_dataset(
    n_shapes=n_shapes,
    n_materials=n_materials,
    materials=[MATERIALS['polycarbonate']],
    n_vertices=n_vertices,
    n_modes=64,
    scale_factor=scale_factor,
    n_refinements=n_refinements,
)
Finish generating shapes and materials
Saved 00009_00000.npy: 100%|██████████| 10/10 [00:06<00:00,  1.55it/s]
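The file names in the progress bar above look like `<shape index>_<material index>.npy`, both zero-padded to five digits. If that reading is right (it is an assumption based on the output alone, not on the library's documentation), they can be parsed with the standard library:

```python
from pathlib import Path

# Assumed naming scheme: "<shape_idx>_<material_idx>.npy",
# both indices zero-padded to five digits.
name = Path("00009_00000.npy")
shape_idx, material_idx = (int(part) for part in name.stem.split("_"))
print(shape_idx, material_idx)  # 9 0
```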

source

MultiShapeMultiMaterialDataset

 MultiShapeMultiMaterialDataset (index_map_path:pathlib.Path,
                                 audio_length_in_seconds:float=0.3,
                                 sample_rate:int=16000)

A synthetic dataset of materials, shapes and audio generated from their modal vibrations.

Type Default Details
index_map_path Path Path to the index map CSV file
audio_length_in_seconds float 0.3 Length of audio to render
sample_rate int 16000 Sample rate of audio to render
Returns None
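With these defaults, each item's audio clip spans audio_length_in_seconds × sample_rate samples, which is what the `(4800,)` shape printed in the example below reflects:

```python
# Length of one audio clip with the defaults above (0.3 s at 16 kHz).
audio_length_in_seconds = 0.3
sample_rate = 16000

n_samples = int(audio_length_in_seconds * sample_rate)
print(n_samples)  # 4800
```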

Load a dataset

import numpy as np
import matplotlib.pyplot as plt
from pathlib import Path

sr = 16000
dataset = MultiShapeMultiMaterialDataset(
    Path("data/index_map.csv"),
    sample_rate=sr,
)
data = dataset[np.random.choice(len(dataset))]

mask = data["mask"]
scaled_coords = data["coords"] * mask.shape[1]
material = data["material_params"]
audio = data["audio"]
print(audio.shape)
print(f"Material: {material}")

fig, ax = plt.subplots(1, 2, figsize=(10, 5))
ax[0].imshow(mask[0])
ax[0].scatter(scaled_coords[0], scaled_coords[1], c="r")
ax[1].plot(audio)
ax[1].set_title("Audio")
plt.show()

save_and_display_audio(audio, "dataload.wav", sr)
(4800,)
Material: [ 0.07263158  0.0014014   0.74       -0.01724138  0.1959799 ]
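In the snippet above, `data["coords"]` is multiplied by `mask.shape[1]` before plotting, which suggests the vertex coordinates are stored normalized to [0, 1] and the mask is a `(1, H, W)` occupancy map. A minimal numpy sketch of that scaling step (the item layout here is an assumption, stood in with dummy data):

```python
import numpy as np

# Hypothetical stand-ins for one dataset item: a (1, 64, 64) occupancy
# mask and vertex coordinates normalized to [0, 1].
mask = np.zeros((1, 64, 64))
coords = np.array([[0.25, 0.5, 0.75],   # x coordinates
                   [0.10, 0.90, 0.50]]) # y coordinates

# Scale normalized coordinates to pixel units, as in the example above.
scaled_coords = coords * mask.shape[1]
print(scaled_coords[0])  # [16. 32. 48.]
```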

Data module


source

MultiShapeMultiMaterialDataModule

 MultiShapeMultiMaterialDataModule
                     (train_index_map_path:Union[pathlib.Path,str],
                      val_index_map_path:Union[pathlib.Path,str],
                      test_index_map_path:Union[pathlib.Path,str],
                      audio_length_in_seconds:float=0.3,
                      sample_rate:int=16000,
                      material_ranges:neuralresonator.modal.MaterialRanges=
                          MaterialRanges(rho=(500.0, 10000.0),
                          E=(1000000000.0, 1000000000000.0),
                          nu=(0.0, 0.5), alpha=(1.0, 30.0),
                          beta=(1e-08, 2e-06)),
                      batch_size:int=1, num_workers:int=0,
                      pin_memory:bool=False)

A data module for the MultiShapeMultiMaterialDataset.

Type Default Details
train_index_map_path typing.Union[pathlib.Path, str] Path to the training index map
val_index_map_path typing.Union[pathlib.Path, str] Path to the validation index map
test_index_map_path typing.Union[pathlib.Path, str] Path to the test index map
audio_length_in_seconds float 0.3 Length of audio to render
sample_rate int 16000 Sample rate of audio to render
material_ranges MaterialRanges MaterialRanges(rho=(500.0, 10000.0), E=(1000000000.0, 1000000000000.0), nu=(0.0, 0.5), alpha=(1.0, 30.0), beta=(1e-08, 2e-06)) Ranges of material parameters
batch_size int 1 Batch size
num_workers int 0 Number of workers for data loading
pin_memory bool False Whether to pin memory for data loading
Returns None
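The material_params vector printed earlier has entries roughly in [0, 1], so a min-max normalization over these MaterialRanges is one plausible reading of how materials are encoded (an assumption; the actual mapping lives in neuralresonator.modal, not on this page):

```python
# Hypothetical min-max normalization of a material parameter over the
# default MaterialRanges shown above (assumed, not confirmed here).
ranges = {
    "rho": (500.0, 10_000.0),
    "E": (1e9, 1e12),
    "nu": (0.0, 0.5),
    "alpha": (1.0, 30.0),
    "beta": (1e-8, 2e-6),
}

def normalize(value, lo, hi):
    """Map value from [lo, hi] to [0, 1]."""
    return (value - lo) / (hi - lo)

# Density of the default Material from SingleShapeDataset (rho=2700).
print(normalize(2700, *ranges["rho"]))  # ~0.2316
```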

Create a data module

dataset_args = dict(
    audio_length_in_seconds=0.3,
    sample_rate=16000,
)

datamodule = MultiShapeMultiMaterialDataModule(
    train_index_map_path="data/index_map.csv",
    val_index_map_path="data/index_map.csv",
    test_index_map_path="data/index_map.csv",
    **dataset_args,
)