PVRA (Publicly Verifiable Remote Attestation) is a framework that lets developers bootstrap a range of auditing capabilities and security properties for their enclave-based applications that are otherwise vulnerable. The goal of this template is to provide a clean interface to the PVRA framework components and an intuitive way to write these applications. We showcase four example applications: VirtualStatusCard, HeatMap, EVoting, and SecureDataTransfer. To browse the trace of a PVRA application, refer to `./applications/`. VSC is currently at 229 LoC and HeatMap at 205 LoC.
## Run an existing application in Docker, without CCF, in SGX simulation mode

- Set environment variables:

  ```
  export PROJECT_ROOT=$(pwd)
  export CCF_ENABLE=0
  export SGX_SPID=None
  export IAS_PRIMARY_KEY=None
  export APP_NAME=<sdt or heatmap or vsc>
  export SGX_MODE=SW
  ```
- Set `NUM_USERS`:
  - sdt functionality tests require 5 users:

    ```
    export NUM_USERS=5
    ```

  - heatmap expects 4 users:

    ```
    export NUM_USERS=4
    ```

  - vsc expects 8 users:

    ```
    export NUM_USERS=8
    ```
- Build and run the Docker image:

  ```
  cd $PROJECT_ROOT
  ./setup.sh -a $APP_NAME
  cd $PROJECT_ROOT/docker
  ./build.sh
  ./run.sh
  ```
## Writing an application

There are three application-specific files that must be modified to implement an application.
- `appPVRA.h`

  This is the header file for the application; it defines the types of commands the enclave processes, the structure of command inputs/outputs, and the structure of application data.

- `appPVRA.c`

  This is the enclave-executable application code. Every command should have an associated execution kernel. Two auxiliary functions are required:

  - `initES()` initializes the application data structures.
  - Write the application commands and set `NUM_COMMANDS`.
    - Every command must use the function signature:

      ```c
      struct cResponse pvraCommandName(struct ES *enclave_state, struct cInputs *CI, uint32_t uidx)
      ```

    - Optional: add admin-specific commands by defining `NUM_ADMIN_COMMANDS` in `appPVRA.h` and initializing them at the end of the function list; they must use the same function signature.
  - `initFP()` associates functions with the enumerated commands (`COMMAND0` through `COMMAND(NUM_COMMANDS+NUM_ADMIN_COMMANDS)`).
  - Optional: enable user-account auditing with a Merkle tree by setting `#define MERKLE_TREE` in `appPVRA.h`:
    - `get_user_leaf` generates a list of leaf-node structs, one per user account. Each leaf node should contain a `uidx` field corresponding to the `uidx` input into PVRA commands.
- `application.py`

  This Python file generates the user input data and admin input data for deploying the application.

  - `get_test_data` returns data for testing application functionality.
  - `get_test_data_omission` returns data for testing data-omission scenarios.
  - `format_command` converts test data to a serialized C `struct private_command`.
  - `print_cResponse` converts a serialized C `struct cResponse` to a printable Python string.
  - Optional: enable user-account auditing with a Merkle tree by setting `constants.MERKLE(True)` in `application.py`:
    - `print_leaf` converts the serialized C struct generated by `get_user_leaf` to a printable Python string.
    - `get_leaf` converts the serialized C struct generated by `get_user_leaf` to a Python dictionary (with a `uidx` field).
## Running an application

- Set environment variables:

  ```
  export PROJECT_ROOT=$(pwd)
  export CCF_ENABLE=<0 or 1>
  export SGX_SPID=<SGX_SPID>
  export IAS_PRIMARY_KEY=<IAS_PRIMARY_KEY>
  export NUM_USERS=<NUM_USERS>
  export APP_NAME=<APP_NAME>
  export SGX_MODE=<HW or SW>
  export CCF_PATH=<optional> #todo how to run CCF
  ```
- The CCF public key is hardcoded in the enclave image as a root of trust and must be updated in `enclave/initPVRA.c`. To run the demo without SCS protection, one can set:

  ```
  export CCF_ENABLE=0
  ```
- To run an existing application, pass the `APP_NAME` to the `./setup.sh` script. `setup.sh` takes as arguments `-a <APP_NAME>`, the name of the directory in `$PROJECT_ROOT/application`, and `-c <CCF_PATH>`, the directory that contains the credentials for communicating with the running CCF network. If no arguments are passed, it uses the VSC application. `--clean` undoes the effects of the script.

  ```
  ./setup.sh -a $APP_NAME -c $CCF_PATH
  ```
- For more than 9 users, change the value after `"--accounts"` in `docker/docker-compose.yml` to the desired number of users + 1 (the extra account is the admin).
- Hardware mode:

  ```
  export SGX_MODE=HW
  cd $PROJECT_ROOT/docker
  docker-compose build enclave
  docker-compose run --rm enclave bash
  ```
- Simulation mode:

  ```
  export SGX_MODE=SW
  cd $PROJECT_ROOT/docker
  docker-compose build enclave-sim
  docker-compose run --rm enclave-sim bash
  ```
- For local deployment, Python 3, the SGX SDK, and Docker are required:

  ```
  pip install -r requirements.txt
  export SGX_SDK=/opt/sgxsdk #or your local sgx sdk path
  export LD_LIBRARY_PATH=$SGX_SDK/sdk_libs:$LD_LIBRARY_PATH
  export SGX_MODE=<HW or SW>
  cd $PROJECT_ROOT/scripts
  ./build.sh
  ./run_BB.sh
  export BILLBOARD_URL="http://$(docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' billboard):8545"
  ```
- Run the demo with `get_test_data` output:

  ```
  python demo.py demo <optional: NUM_USERS>
  ```

- Run the tests with `get_test_data` output (checks the expected responses and leaf nodes for correctness):

  ```
  python demo.py test <optional: NUM_USERS> <optional: test case name>
  ```

  See `vsc/application.py` for an example with test-case names.
- Run the data-omission demo with `get_test_data_omission` output:

  ```
  python demo.py data_omission_demo <optional: NUM_USERS>
  ```
## Cleanup

```
cd $PROJECT_ROOT
make clean
```

- Docker deployment:

  ```
  cd $PROJECT_ROOT/docker
  docker-compose down
  ```

- Local deployment:

  ```
  cd $PROJECT_ROOT/scripts
  ./stop_BB.sh
  ```
## Scripts

- `setup.sh`: sets up the application-specific files.
- `scripts/build.sh`: copies the relevant application files and builds the enclave and untrusted host (works both locally and in the Docker container).
- `scripts/copy.sh`: copies the relevant application files.
- `scripts/run_BB.sh`: starts a truffle/ganache bulletin-board instance for local (non-docker-compose) deployments.
- `scripts/stop_BB.sh`: stops a truffle/ganache bulletin-board instance for local (non-docker-compose) deployments.
- `docker/build.sh`: builds the Docker images.
  - Builds a simulation or hardware enclave based on the `SGX_MODE` environment variable (default is HW).
- `docker/run.sh`: runs the Docker containers.
  - Runs a simulation or hardware enclave based on the `SGX_MODE` environment variable (default is HW).
  - `docker/run.sh <cmd>` runs `<cmd>` inside the Docker container.
    - Example: `./run.sh bash` opens a bash terminal in the enclave Docker container.
    - Example: `./run.sh "python demo.py test"` runs the tests in the enclave Docker container.