
PyWhy-LLM: Leveraging Large Language Models for Causal Analysis


Introduction

PyWhy-LLM is a library designed to augment human expertise by integrating Large Language Models (LLMs) into the causal analysis process. It gives users access to knowledge that was previously available only through domain experts. As part of the DoWhy community, we aim to investigate and harness the capabilities of LLMs for enhancing the causal analysis process.

Documentation and Tutorials

For detailed usage instructions and tutorials, refer to the tutorial notebook.

Installation

To install PyWhy-LLM, you can use pip:

pip install pywhyllm

Usage

PyWhy-LLM seamlessly integrates into your existing causal inference process. Import the necessary classes and start exploring the power of LLM-augmented causal analysis.

from pywhyllm.suggesters.model_suggester import ModelSuggester 
from pywhyllm.suggesters.identification_suggester import IdentificationSuggester
from pywhyllm.suggesters.validation_suggester import ValidationSuggester
from pywhyllm import RelationshipStrategy

Modeler

# Create an instance of the Modeler
modeler = ModelSuggester('gpt-4')

all_factors = ["smoking", "lung cancer", "exercise habits", "air pollution exposure"]
treatment = "smoking"
outcome = "lung cancer"

# Suggest a list of domain expertises
domain_expertises = modeler.suggest_domain_expertises(all_factors)

# Suggest a set of potential confounders
suggested_confounders = modeler.suggest_confounders(treatment, outcome, all_factors, domain_expertises)

# Suggest pairwise relationships between variables, returned as a suggested DAG
suggested_dag = modeler.suggest_relationships(treatment, outcome, all_factors, domain_expertises, RelationshipStrategy.Pairwise)

Identifier

# Create an instance of the Identifier
identifier = IdentificationSuggester('gpt-4')

# Suggest a backdoor set, a mediator set, and an instrumental variable (IV) set
suggested_backdoor = identifier.suggest_backdoor(treatment, outcome, all_factors, domain_expertises)
suggested_mediators = identifier.suggest_mediators(treatment, outcome, all_factors, domain_expertises)
suggested_iv = identifier.suggest_ivs(treatment, outcome, all_factors, domain_expertises)

Validator

# Create an instance of the Validator
validator = ValidationSuggester('gpt-4')

# Suggest a critique of the edges in the provided DAG
suggested_critiques_dag = validator.critique_graph(all_factors, suggested_dag, domain_expertises, RelationshipStrategy.Pairwise)

# Suggest latent confounders
suggested_latent_confounders = validator.suggest_latent_confounders(treatment, outcome, all_factors, domain_expertises)

# Suggest negative controls
suggested_negative_controls = validator.suggest_negative_controls(treatment, outcome, all_factors, domain_expertises)

Contributors ✨

This project welcomes contributions and suggestions. For a guide to contributing and a list of all contributors, check out CONTRIBUTING.md. Our contributor code of conduct is available here.

If you encounter an issue or have a specific request for DoWhy, please raise an issue.