This repository contains the frontend implementation for Polyphony, our interactive transfer-learning framework for reference-based single-cell data analysis.
polyphony-vis is implemented using the Vitessce framework and its plugin APIs.
## Run polyphony-vis (frontend)

In this repository, run:

```sh
npm install
npm run start
```

Developed under Node v14.0.0 and NPM v6.14.16.
## Run polyphony (backend)

Currently works with `polyphony` commit `dc09630`.

Clone the https://github.com/scPolyphony/polyphony repository, then:

```sh
cd polyphony
git checkout dc09630
mamba env create -f environment.yml  # or use conda
conda activate polyphony-env
pip install -e .
mkdir data
```
In the root of the `polyphony` repository, run:

```sh
conda activate polyphony-env
polyphony --experiment case-1 --save --load_exist --port 7778
```
## Plugin view types

All plugin view types assume there are two `dataset` coordination scopes (named `REFERENCE` and `QUERY`).
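For orientation, here is a minimal sketch of how a Vitessce view config can define two `dataset` coordination scopes named `REFERENCE` and `QUERY`. This is not the actual Polyphony configuration: the dataset uids, file lists, layout entry, and the exact shape of the view's `coordinationScopes` mapping are assumptions.

```js
// Sketch only: a Vitessce-style view config with two `dataset` coordination
// scopes (REFERENCE and QUERY). Dataset uids, files, and the layout entry
// are hypothetical and not taken from this repository.
const config = {
  version: '1.0.4',
  name: 'Polyphony-style config (sketch)',
  datasets: [
    { uid: 'ref', name: 'Reference', files: [] }, // reference dataset files omitted
    { uid: 'qry', name: 'Query', files: [] },     // query dataset files omitted
  ],
  coordinationSpace: {
    // Two named scopes for the `dataset` coordination type.
    dataset: { REFERENCE: 'ref', QUERY: 'qry' },
  },
  layout: [
    {
      component: 'qrComparisonScatterplot',
      // Assumed: the plugin view reads from both dataset scopes.
      coordinationScopes: { dataset: ['REFERENCE', 'QUERY'] },
      x: 0, y: 0, w: 8, h: 8,
    },
  ],
  initStrategy: 'auto',
};
```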
- Registered as `qrComparisonScatterplot`.
- Registered as `qrCellSets`.
- Registered as `qrScores`.
- App header view; provides controls for selecting anchor sets and updating the model. Registered as `qrStatus`.
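As a rough illustration of how such views plug into Vitessce, the sketch below registers a view type with `registerPluginViewType` from the Vitessce plugin API. The component (`QRStatusSubscriber`) and the list of coordination types it subscribes to are assumptions, not the exact code in this repository.

```js
// Sketch: registering a plugin view type with the Vitessce plugin API.
// The component below is a placeholder; the real subscriber components in
// polyphony-vis are more involved.
import React from 'react';
import { registerPluginViewType, CoordinationType as ct } from 'vitessce';

function QRStatusSubscriber() {
  // A real subscriber would read and write coordination values via
  // Vitessce hooks; this placeholder only renders a label.
  return React.createElement('div', null, 'Polyphony status');
}

registerPluginViewType(
  'qrStatus',          // view type name referenced from the view config layout
  QRStatusSubscriber,  // React component implementing the view
  [ct.DATASET],        // coordination types the view subscribes to (illustrative)
);
```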
## Plugin coordination types
| Description | Values | Default |
| --- | --- | --- |
| Is the user currently lassoing? | `null` or `'lasso'` | `null` |
| Which anchor set is the user currently editing? | | `null` |
| Which anchor set to focus? (caused by a click in the Anchor Set View) | | `null` |
| Which anchor set to highlight? (caused by a hover in the Anchor Set View) | | `null` |
| Whether the embedding is visible in the comparison view. Intended to be used to show the reference or query only. | Boolean | `true` |
| How to encode cells on the scatterplot? | `'scatterplot'`, `'heatmap'`, `'contour'`, or `'scatterplot-and-contour'` | `'scatterplot'` |
| Whether the anchor link glyphs between corresponding query and reference sets are visible. | Boolean | `false` |
| How to sort and filter the list of anchor sets in the Anchor Set View. | | `null` |
| Should the rendering preset buttons be rendered in the comparison view? | Boolean | `true` |
| Should the legends be rendered in the comparison view? | Boolean | `true` |
| | Boolean | `false` |
| Should the anchor link glyph line width be mapped to the anchor set score? | `null` or `'anchorSetScores'` | `'anchorSetScores'` |
| The same idea as the built-in `cellColorEncoding`, but can also take on the value `'dataset'`. | `'dataset'`, `'geneSelection'`, or `'cellSetSelection'` | `'cellSetSelection'` |
| Holds the state (loading, success, error) of requests to the anchor API endpoint. | Object | `{ iteration: 1, status: 'success', message: null }` |
| Holds the state (loading, success, error) of requests to the model API endpoint. | Object | `{ iteration: 1, status: 'success', message: null }` |
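The sketch below shows how coordination types with defaults like those above could be registered, assuming the Vitessce plugin API (`registerPluginCoordinationType`). The type names used here are illustrative placeholders, not necessarily the names defined in this repository; the default values are taken from the table above.

```js
// Sketch: registering plugin coordination types with default values.
// Type names are hypothetical; defaults mirror entries in the table above.
import { registerPluginCoordinationType } from 'vitessce';

// A boolean-valued type that defaults to true (e.g. embedding visibility).
registerPluginCoordinationType('embeddingVisible', true);

// A string-valued type with a default (e.g. the cell encoding mode).
registerPluginCoordinationType('embeddingEncoding', 'scatterplot');

// An object-valued type holding API request state.
registerPluginCoordinationType('anchorApiState', {
  iteration: 1,
  status: 'success',
  message: null,
});
```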
## Citation

To cite Polyphony in your work, please use:

```bibtex
@article{cheng2022polyphony,
  title = {Polyphony: an {Interactive} {Transfer} {Learning} {Framework} for {Single}-{Cell} {Data} {Analysis}},
  author = {Cheng, Furui and Keller, Mark S. and Qu, Huamin and Gehlenborg, Nils and Wang, Qianwen},
  journal = {OSF Preprints},
  year = {2022},
  month = apr,
  doi = {10.31219/osf.io/b76nt},
  url = {https://osf.io/b76nt/},
  language = {en}
}
```
polyphony-vis was originally implemented as a fork of the Vitessce repository at https://github.com/ChengFR/vitessce/tree/figure-making (see https://github.com/vitessce/vitessce/compare/master...ChengFR:figure-making) before being refactored into the plugin-based implementation in this repository. Some utility functions have been copied from Vitessce.