
Speed up tests #430 (Draft)

frostedoyster wants to merge 8 commits into main from speed-up-tests
Conversation

frostedoyster (Collaborator) commented Dec 16, 2024

Closes #340


📚 Documentation preview 📚: https://metatrain--430.org.readthedocs.build/en/430/

frostedoyster changed the base branch from main to eval-batch-size on December 16, 2024 at 12:43
frostedoyster requested a review from Luthaf on December 16, 2024 at 14:08
Luthaf (Member) commented Dec 17, 2024

This is nice, and it cuts the test time to a third for me!

But I get a test failure on macOS:

    def test_llpr(tmpdir):

        model = load_model(
            str(RESOURCES_PATH / "model-64-bit.pt"),
            extensions_directory=str(RESOURCES_PATH / "extensions/"),
        )
        qm9_systems = read_systems(RESOURCES_PATH / "qm9_reduced_100.xyz")
        target_config = {
            "energy": {
                "quantity": "energy",
                "read_from": str(RESOURCES_PATH / "qm9_reduced_100.xyz"),
                "reader": "ase",
                "key": "U0",
                "unit": "kcal/mol",
                "type": "scalar",
                "per_atom": False,
                "num_subtargets": 1,
                "forces": False,
                "stress": False,
                "virial": False,
            },
        }
        targets, _ = read_targets(target_config)
        requested_neighbor_lists = get_requested_neighbor_lists(model)
        qm9_systems = [
            get_system_with_neighbor_lists(system, requested_neighbor_lists)
            for system in qm9_systems
        ]
        dataset = Dataset.from_dict({"system": qm9_systems, **targets})
        dataloader = torch.utils.data.DataLoader(
            dataset,
            batch_size=10,
            shuffle=False,
            collate_fn=collate_fn,
        )

        llpr_model = LLPRUncertaintyModel(model)
        llpr_model.compute_covariance(dataloader)
        llpr_model.compute_inverse_covariance()

        exported_model = MetatensorAtomisticModel(
            llpr_model.eval(),
            ModelMetadata(),
            llpr_model.capabilities,
        )

        evaluation_options = ModelEvaluationOptions(
            length_unit="angstrom",
            outputs={
                "mtt::aux::energy_uncertainty": ModelOutput(per_atom=True),
                "energy": ModelOutput(per_atom=True),
                "mtt::aux::energy_last_layer_features": ModelOutput(per_atom=True),
            },
            selected_atoms=None,
        )

        outputs = exported_model(
            qm9_systems[:5], evaluation_options, check_consistency=True
        )

        assert "mtt::aux::energy_uncertainty" in outputs
        assert "energy" in outputs
        assert "mtt::aux::energy_last_layer_features" in outputs

        assert outputs["mtt::aux::energy_uncertainty"].block().samples.names == [
            "system",
            "atom",
        ]
        assert outputs["energy"].block().samples.names == ["system", "atom"]
        assert outputs["mtt::aux::energy_last_layer_features"].block().samples.names == [
            "system",
            "atom",
        ]

        # Now test the ensemble approach
        params = []  # One per element, SOAP-BPNN
        for name, param in llpr_model.model.named_parameters():
            if "last_layers" in name and "energy" in name:
                params.append(param.squeeze())
        weights = torch.cat(params)

        n_ensemble_members = 10000
        llpr_model.calibrate(dataloader)
        llpr_model.generate_ensemble({"energy": weights}, n_ensemble_members)
        assert "energy_ensemble" in llpr_model.capabilities.outputs

        exported_model = MetatensorAtomisticModel(
            llpr_model.eval(),
            ModelMetadata(),
            llpr_model.capabilities,
        )

        exported_model.save(
            file=str(tmpdir / "llpr_model.pt"),
            collect_extensions=str(tmpdir / "extensions"),
        )
        llpr_model = load_model(
            str(tmpdir / "llpr_model.pt"), extensions_directory=str(tmpdir / "extensions")
        )

        evaluation_options = ModelEvaluationOptions(
            length_unit="angstrom",
            outputs={
                "energy": ModelOutput(per_atom=False),
                "mtt::aux::energy_uncertainty": ModelOutput(per_atom=False),
                "energy_ensemble": ModelOutput(per_atom=False),
            },
            selected_atoms=None,
        )
        outputs = exported_model(
            qm9_systems[:5], evaluation_options, check_consistency=True
        )

        assert "mtt::aux::energy_uncertainty" in outputs
        assert "energy_ensemble" in outputs

        analytical_uncertainty = outputs["mtt::aux::energy_uncertainty"].block().values
        ensemble_uncertainty = torch.var(
            outputs["energy_ensemble"].block().values, dim=1, keepdim=True
        )

>       torch.testing.assert_close(
            analytical_uncertainty, ensemble_uncertainty, rtol=1e-2, atol=1e-2
        )
E       AssertionError: Tensor-likes are not close!
E
E       Mismatched elements: 1 / 5 (20.0%)
E       Greatest absolute difference: 0.015089977605787154 at index (0, 0) (up to 0.01 allowed)
E       Greatest relative difference: 0.4452166739839174 at index (0, 0) (up to 0.01 allowed)

utils/test_llpr.py:145: AssertionError
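
For scale, a rough back-of-the-envelope check (a sketch, not code from this PR): for roughly Gaussian ensemble members, the sample variance estimated from n draws has a relative standard error of about sqrt(2 / (n - 1)), so a 10000-member ensemble should only fluctuate by about 1.4%. The observed 44.5% relative difference is far beyond that, which suggests a real discrepancy rather than ensemble sampling noise.

    import math

    # Monte Carlo noise expected in a sample-variance estimate from n draws of a
    # (roughly Gaussian) ensemble: SD(s^2) / sigma^2 ~= sqrt(2 / (n - 1)).
    n_ensemble_members = 10_000  # value used in the test above
    relative_se = math.sqrt(2 / (n_ensemble_members - 1))

    print(f"expected sampling noise: ~{relative_se:.1%}")  # ~1.4%
    print("observed relative difference: 44.5%")  # from the failure above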

frostedoyster (Collaborator, Author) commented

Ok! The failing test was my fault; you can try again now.

frostedoyster force-pushed the speed-up-tests branch 2 times, most recently from c92b6ed to 205fa40 on December 17, 2024 at 10:46
Base automatically changed from eval-batch-size to main on December 17, 2024 at 14:07
Development

Successfully merging this pull request may close these issues: Tests are too slow (#340)

2 participants