+The agent is responsible for the life cycle of the computation, i.e., running the computation and sending events about its status within the TEE. The agent lives inside the VM (TEE), and each computation within the TEE has its own agent. When a computation run request reaches the manager, the manager creates a VM in which the agent runs and sends the computation manifest to the agent.
+The picture below shows where the agent runs in the Cocos system, helping us better understand its role.
+The agent operates as a state machine. Its states are:
+
+- idle: Initial state, waiting for the computation to start.
+- receivingManifest: Receives the initial computation manifest.
+- receivingAlgorithm: Receives the algorithm for the computation.
+- receivingData: Receives dataset data for the computation.
+- running: Executes the computation using the received algorithm and data.
+- resultsReady: Computation has finished, results are available.
+- complete: All results have been consumed, the computation lifecycle ends.
+
+The events that drive transitions between these states are:
+
+- start: Triggers the computation startup process.
+- manifestReceived: Indicates the computation manifest has been received.
+- algorithmReceived: Indicates the algorithm has been received.
+- dataReceived: Indicates all dataset data has been received.
+- runComplete: Signals the completion of the computation execution.
+- resultsConsumed: Indicates all consumers have retrieved the results.
+
+As the computation in the agent undergoes different operations, it sends events to the manager so that the user can monitor the computation from the UI or another client. Events sent to the manager are based on the agent state as defined by the state machine; a minimal sketch of these states and transitions follows.
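+The Go snippet below is an illustrative sketch of the state/event table above, not the agent's actual implementation; the state and event names mirror the ones listed here, but the exact wiring of which event fires from which state is inferred from the descriptions and may differ from the real agent.

```go
package main

import "fmt"

// State and Event mirror the agent states and events described above.
type State string
type Event string

const (
	Idle               State = "idle"
	ReceivingManifest  State = "receivingManifest"
	ReceivingAlgorithm State = "receivingAlgorithm"
	ReceivingData      State = "receivingData"
	Running            State = "running"
	ResultsReady       State = "resultsReady"
	Complete           State = "complete"
)

const (
	Start             Event = "start"
	ManifestReceived  Event = "manifestReceived"
	AlgorithmReceived Event = "algorithmReceived"
	DataReceived      Event = "dataReceived"
	RunComplete       Event = "runComplete"
	ResultsConsumed   Event = "resultsConsumed"
)

// transitions records which event moves the agent from one state to the next.
var transitions = map[State]map[Event]State{
	Idle:               {Start: ReceivingManifest},
	ReceivingManifest:  {ManifestReceived: ReceivingAlgorithm},
	ReceivingAlgorithm: {AlgorithmReceived: ReceivingData},
	ReceivingData:      {DataReceived: Running},
	Running:            {RunComplete: ResultsReady},
	ResultsReady:       {ResultsConsumed: Complete},
}

func main() {
	state := Idle
	// Walk the happy path through the lifecycle, printing transitions the way
	// the agent logs them (e.g. "Transition: running -> resultsReady").
	for _, ev := range []Event{Start, ManifestReceived, AlgorithmReceived, DataReceived, RunComplete, ResultsConsumed} {
		next, ok := transitions[state][ev]
		if !ok {
			fmt.Printf("event %s not allowed in state %s\n", ev, state)
			return
		}
		fmt.Printf("Transition: %s -> %s (event: %s)\n", state, next, ev)
		state = next
	}
}
```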
+The agent sends events and logs to the manager via vsock. The manager listens on the vsock channel and forwards them via gRPC. These events and logs show the status of the computation inside the TEE so that a user can follow what is happening within it; a hedged sketch of this event path is shown below.
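+The sketch below illustrates pushing an event from the guest to the host over vsock. It assumes the third-party mdlayher/vsock Go package, a made-up port number, and a simplified JSON payload; the real agent uses its own message format and framing, so treat this purely as an illustration of the transport.

```go
package main

import (
	"encoding/json"
	"log"
	"time"

	"github.com/mdlayher/vsock"
)

// AgentEvent is a simplified stand-in for the event messages the agent emits;
// the real messages are structured differently (see the agent_event logs below).
type AgentEvent struct {
	EventType     string    `json:"event_type"`
	ComputationID string    `json:"computation_id"`
	Status        string    `json:"status"`
	Timestamp     time.Time `json:"timestamp"`
}

func main() {
	// vsock.Host (CID 2) addresses the host from inside the guest VM.
	// Port 9999 is an arbitrary example value, not the port cocos actually uses.
	conn, err := vsock.Dial(vsock.Host, 9999, nil)
	if err != nil {
		log.Fatalf("dial vsock: %v", err)
	}
	defer conn.Close()

	ev := AgentEvent{
		EventType:     "running",
		ComputationID: "1",
		Status:        "in-progress",
		Timestamp:     time.Now(),
	}
	// Send one JSON-encoded event; the manager side would read it from its end
	// of the vsock connection and forward it over gRPC.
	if err := json.NewEncoder(conn).Encode(ev); err != nil {
		log.Fatalf("send event: %v", err)
	}
}
```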
+The agent can fetch the attestation report from the host using the AMD SEV guest driver. The attestation report proves that the agent is running inside the secure virtual machine (SVM) and that the SVM is running the expected code on the expected hardware and is configured correctly.
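+As an illustration of fetching a report through the SEV guest driver, the sketch below uses the google/go-sev-guest client package. The nonce and the overall flow are assumptions for demonstration only; the agent's actual attestation and verification code differs in detail.

```go
package main

import (
	"crypto/sha512"
	"fmt"
	"log"

	"github.com/google/go-sev-guest/client"
)

func main() {
	// Open the SEV-SNP guest device (/dev/sev-guest) exposed by the guest driver.
	device, err := client.OpenDevice()
	if err != nil {
		log.Fatalf("open SEV guest device: %v", err)
	}
	defer device.Close()

	// The 64-byte report data field is normally a caller-chosen nonce or a hash
	// binding the report to something (e.g. a public key); here we hash a label.
	reportData := sha512.Sum512([]byte("example-nonce"))

	// Request the raw attestation report signed by the AMD Secure Processor.
	raw, err := client.GetRawReport(device, reportData)
	if err != nil {
		log.Fatalf("get attestation report: %v", err)
	}
	fmt.Printf("got SEV-SNP attestation report, %d bytes\n", len(raw))
}
```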
+Before execution, algorithms and datasets are validated against the computation manifest to ensure integrity and compatibility. This includes the SHA3-256 hashes of the dataset and algorithm, which are checked against the values set in the manifest. The algorithm and dataset provider IDs are also validated against the manifest when the dataset and algorithm are uploaded.
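+For reference, the sketch below shows how a SHA3-256 digest of an algorithm or dataset file can be computed in Go and compared with a hex-encoded value such as the one recorded in the manifest. The file path and expected hash are placeholders.

```go
package main

import (
	"encoding/hex"
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/sha3"
)

func main() {
	// Placeholder inputs: the algorithm binary and the hash the manifest would carry.
	path := "./addition"
	expectedHex := "0000000000000000000000000000000000000000000000000000000000000000"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatalf("read %s: %v", path, err)
	}

	// SHA3-256 digest of the whole file, the same kind of hash the manifest records.
	sum := sha3.Sum256(data)
	gotHex := hex.EncodeToString(sum[:])

	if gotHex == expectedHex {
		fmt.Println("hash matches the manifest")
	} else {
		fmt.Printf("hash mismatch: got %s\n", gotHex)
	}
}
```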
+There are four supported algorithm types: binaries, Python files, Docker images, and WASM modules. The default algorithm type is binary, uploaded to the agent using the CLI. Instructions on how to provide a Python file are provided in the CLI. More information on how to run the other types of algorithms can be found here.
+Currently, cocos supports running the following algorithms: binaries, Python files, Docker images, and WASM modules.
+Binary algorithms are compiled to run inside the enclave, a secure environment that runs on the host machine. The enclave is created by the manager, and the agent is loaded into it. The agent is responsible for running the computation and communicating with the outside world.
+For binary algorithms, the process is similar whether or not datasets are used, and it mirrors the flow for the other algorithm types.
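+To give a feel for what a binary algorithm can look like, here is a minimal Go sketch. It is purely illustrative (not the burn-based addition example used below) and it assumes the datasets/results directory convention described later for Docker algorithms; consult the AI repository examples for the exact I/O contract the agent expects from plain binaries.

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	// Assumed layout: input files under ./datasets, output under ./results.
	// This mirrors the /cocos/datasets and /cocos/results layout described for
	// Docker algorithms below; the plain-binary contract may differ.
	datasets, err := filepath.Glob("datasets/*")
	if err != nil {
		log.Fatalf("list datasets: %v", err)
	}

	if err := os.MkdirAll("results", 0o755); err != nil {
		log.Fatalf("create results dir: %v", err)
	}

	// A toy "computation": record how many dataset files were provided.
	out := fmt.Sprintf("processed %d dataset file(s)\n", len(datasets))
	if err := os.WriteFile(filepath.Join("results", "results.txt"), []byte(out), 0o644); err != nil {
		log.Fatalf("write results: %v", err)
	}
}
```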
+NOTE: Make sure you have terminated the previous computation before starting a new one.
+Download the examples from the AI repository and follow the instructions in the README file to compile one of the examples.
+git clone https://github.com/ultravioletrs/ai
+
+Make sure you have Rust installed. If not, you can install it by following the instructions here.
+cd ai/burn-algorithms
+
+cargo build --release --bin addition --features cocos
+
+This will generate the binary in the target/release folder. Copy the binary to the cocos folder.
cp target/release/addition ../../cocos/
+
+Start the computation server:
+go run ./test/computations/main.go ./addition public.pem false
+
+The logs will be similar to this:
+{"time":"2024-08-19T14:09:28.852409931+03:00","level":"INFO","msg":"manager_test_server service gRPC server listening at :7001 without TLS"}
+
+Start the manager
+sudo \
+MANAGER_QEMU_SMP_MAXCPUS=4 \
+MANAGER_GRPC_URL=localhost:7001 \
+MANAGER_LOG_LEVEL=debug \
+MANAGER_QEMU_ENABLE_SEV_SNP=false \
+MANAGER_QEMU_OVMF_CODE_FILE=/usr/share/edk2/x64/OVMF_CODE.fd \
+MANAGER_QEMU_OVMF_VARS_FILE=/usr/share/edk2/x64/OVMF_VARS.fd \
+go run main.go
+
+The logs will be similar to this:
+{"time":"2024-08-19T14:10:00.239331599+03:00","level":"INFO","msg":"-enable-kvm -machine q35 -cpu EPYC -smp 4,maxcpus=4 -m 2048M,slots=5,maxmem=30G -drive if=pflash,format=raw,unit=0,file=/usr/share/edk2/x64/OVMF_CODE.fd,readonly=on -drive if=pflash,format=raw,unit=1,file=/usr/share/edk2/x64/OVMF_VARS.fd -netdev user,id=vmnic,hostfwd=tcp::7020-:7002 -device virtio-net-pci,disable-legacy=on,iommu_platform=true,netdev=vmnic,addr=0x2,romfile= -device vhost-vsock-pci,id=vhost-vsock-pci0,guest-cid=3 -vnc :0 -kernel img/bzImage -append \"earlyprintk=serial console=ttyS0\" -initrd img/rootfs.cpio.gz -nographic -monitor pty"}
+{"time":"2024-08-19T14:10:17.798497671+03:00","level":"INFO","msg":"Method Run for computation took 17.438247421s to complete"}
+{"time":"2024-08-19T14:10:17.800162858+03:00","level":"INFO","msg":"Agent Log/Event, Computation ID: 1, Message: agent_log:{message:\"Transition: receivingManifest -> receivingManifest\\n\" computation_id:\"1\" level:\"DEBUG\" timestamp:{seconds:1724065817 nanos:796771386}}"}
+{"time":"2024-08-19T14:10:17.800336232+03:00","level":"INFO","msg":"Agent Log/Event, Computation ID: 1, Message: agent_log:{message:\"Transition: receivingAlgorithm -> receivingAlgorithm\\n\" computation_id:\"1\" level:\"DEBUG\" timestamp:{seconds:1724065817 nanos:797222579}}"}
+{"time":"2024-08-19T14:10:17.80043386+03:00","level":"INFO","msg":"Agent Log/Event, Computation ID: 1, Message: agent_event:{event_type:\"receivingAlgorithm\" timestamp:{seconds:1724065817 nanos:797263757} computation_id:\"1\" originator:\"agent\" status:\"in-progress\"}"}
+{"time":"2024-08-19T14:10:17.8005587+03:00","level":"INFO","msg":"Agent Log/Event, Computation ID: 1, Message: agent_log:{message:\"agent service gRPC server listening at :7002 without TLS\" computation_id:\"1\" level:\"INFO\" timestamp:{seconds:1724065817 nanos:797467753}}"}
+2024/08/19 14:10:20 traces export: Post "http://localhost:4318/v1/traces": dial tcp [::1]:4318: connect: connection refused
+
+The logs from the computation server will be similar to this:
+{"time":"2024-08-19T14:09:28.852409931+03:00","level":"INFO","msg":"manager_test_server service gRPC server listening at :7001 without TLS"}
+{"time":"2024-08-19T14:10:00.354929002+03:00","level":"DEBUG","msg":"received who am on ip address [::1]:57968"}
+received agent event
+&{event_type:"vm-provision" timestamp:{seconds:1724065800 nanos:360232336} computation_id:"1" originator:"manager" status:"starting"}
+received agent event
+&{event_type:"vm-provision" timestamp:{seconds:1724065800 nanos:360990806} computation_id:"1" originator:"manager" status:"in-progress"}
+received agent log
+&{message:"char device redirected to /dev/pts/9 (label compat_monitor0)\n" computation_id:"1" level:"debug" timestamp:{seconds:1724065800 nanos:403232551}}
+received agent log
+&{message:"\x1b[2J" computation_id:"1" level:"debug" timestamp:{seconds:1724065801 nanos:103436975}}
+received agent event
+&{event_type:"vm-provision" timestamp:{seconds:1724065817 nanos:798465068} computation_id:"1" originator:"manager" status:"complete"}
+received runRes
+&{agent_port:"6050" computation_id:"1"}
+received agent log
+&{message:"Transition: receivingManifest -> receivingManifest\n" computation_id: "1" level:"DEBUG" timestamp:{seconds:1724065817 nanos:796771386}}
+received agent log
+&{message:"Transition: receivingAlgorithm -> receivingAlgorithm\n" computation_id:"1" level:"DEBUG" timestamp:{seconds:1724065817 nanos:797222579}}
+received agent event
+&{event_type:"receivingAlgorithm" timestamp:{seconds:1724065817 nanos:797263757} computation_id:"1" originator:"agent" status:"in-progress"}
+received agent log
+&{message:"agent service gRPC server listening at :7002 without TLS" computation_id:"1" level:"INFO" timestamp:{seconds:1724065817 nanos:797467753}}
+
+Export the agent gRPC URL
+export AGENT_GRPC_URL=localhost:6050
+
+Upload the algorithm
+./build/cocos-cli algo ./addition ./private.pem
+
+The logs will be similar to this:
+2024/08/19 14:14:10 Uploading algorithm binary: ./addition
+Uploading algorithm... 100% [===============================================>]
+2024/08/19 14:14:10 Successfully uploaded algorithm
+
+Since the algorithm is a binary, we don't need to upload the requirements file. Also, this is the addition example so we don't need to upload the dataset.
+Finally, download the results
+./build/cocos-cli result ./private.pem
+
+The logs will be similar to this:
+2024/08/19 14:14:31 Retrieving computation result file
+2024/08/19 14:14:31 Computation result retrieved and saved successfully!
+
+Unzip the results
+unzip result.zip -d results
+
+cat results/results.txt
+
+The output will be similar to this:
+"[5.141593, 4.0, 5.0, 8.141593]"
+
+Terminal recording session
+ +For real-world examples to test with cocos, see our AI repository.
+NOTE: Make sure you have terminated the previous computation before starting a new one.
+Make sure you have Rust installed. If not, you can install it by following the instructions here.
+cd ai/burn-algorithms
+
+cargo build --release --bin iris --features cocos
+
+This will generate the binary in the target/release folder. Copy the binary to the cocos folder.
cp target/release/iris ../../cocos/
+
+Copy the dataset to the cocos folder.
cp iris/data/Iris.csv ../../cocos
+
+Start the computation server:
+go run ./test/computations/main.go ./iris public.pem false ./Iris.csv
+
+The logs will be similar to this:
+{"time":"2024-08-19T14:26:11.446590856+03:00","level":"INFO","msg":"manager_test_server service gRPC server listening at :7001 without TLS"}
+
+Start the manager
+sudo \
+MANAGER_QEMU_SMP_MAXCPUS=4 \
+MANAGER_GRPC_URL=localhost:7001 \
+MANAGER_LOG_LEVEL=debug \
+MANAGER_QEMU_ENABLE_SEV_SNP=false \
+MANAGER_QEMU_OVMF_CODE_FILE=/usr/share/edk2/x64/OVMF_CODE.fd \
+MANAGER_QEMU_OVMF_VARS_FILE=/usr/share/edk2/x64/OVMF_VARS.fd \
+go run main.go
+
+The logs will be similar to this:
+{"time":"2024-08-19T14:26:20.869571321+03:00","level":"INFO","msg":"-enable-kvm -machine q35 -cpu EPYC -smp 4,maxcpus=4 -m 2048M,slots=5,maxmem=30G -drive if=pflash,format=raw,unit=0,file=/usr/share/edk2/x64/OVMF_CODE.fd,readonly=on -drive if=pflash,format=raw,unit=1,file=/usr/share/edk2/x64/OVMF_VARS.fd -netdev user,id=vmnic,hostfwd=tcp::7020-:7002 -device virtio-net-pci,disable-legacy=on,iommu_platform=true,netdev=vmnic,addr=0x2,romfile= -device vhost-vsock-pci,id=vhost-vsock-pci0,guest-cid=3 -vnc :0 -kernel img/bzImage -append \"earlyprintk=serial console=ttyS0\" -initrd img/rootfs.cpio.gz -nographic -monitor pty"}
+{"time":"2024-08-19T14:26:39.096019489+03:00","level":"INFO","msg":"Method Run for computation took 18.206099301s to complete"}
+{"time":"2024-08-19T14:26:39.097590785+03:00","level":"INFO","msg":"Agent Log/Event, Computation ID: 1, Message: agent_log:{message:\"Transition: receivingManifest -> receivingManifest\\n\" computation_id:\"1\" level:\"DEBUG\" timestamp:{seconds:1724066799 nanos:94341079}}"}
+{"time":"2024-08-19T14:26:39.097907318+03:00","level":"INFO","msg":"Agent Log/Event, Computation ID: 1, Message: agent_event:{event_type:\"receivingAlgorithm\" timestamp:{seconds:1724066799 nanos:94599012} computation_id:\"1\" originator:\"agent\" status:\"in-progress\"}"}
+{"time":"2024-08-19T14:26:39.097933878+03:00","level":"INFO","msg":"Agent Log/Event, Computation ID: 1, Message: agent_log:{message:\"agent service gRPC server listening at :7002 without TLS\" computation_id:\"1\" level:\"INFO\" timestamp:{seconds:1724066799 nanos:94831037}}"}
+2024/08/19 14:26:40 traces export: Post "http://localhost:4318/v1/traces": dial tcp [::1]:4318: connect: connection refused
+
+The logs from the computation server will be similar to this:
+{"time":"2024-08-19T14:26:11.446590856+03:00","level":"INFO","msg":"manager_test_server service gRPC server listening at :7001 without TLS"}
+{"time":"2024-08-19T14:26:20.871605244+03:00","level":"DEBUG","msg":"received who am on ip address [::1]:47994"}
+received agent event
+&{event_type:"vm-provision" timestamp:{seconds:1724066780 nanos:889897585} computation_id:"1" originator:"manager" status:"starting"}
+received agent event
+&{event_type:"vm-provision" timestamp:{seconds:1724066780 nanos:891441319} computation_id:"1" originator:"manager" status:"in-progress"}
+received agent log
+&{message:"char device redirected to /dev/pts/8 (label compat_monitor0)\n" computation_id:"1" level:"debug" timestamp:{seconds:1724066780 nanos:935158505}}
+received agent log
+&{message:"\x1b[2" computation_id:"1" level:"debug" timestamp:{seconds:1724066781 nanos:414565103}}
+received agent event
+&{event_type:"vm-provision" timestamp:{seconds:1724066799 nanos:95970587} computation_id:"1" originator:"manager" status:"complete"}
+received runRes
+&{agent_port:"6014" computation_id:"1"}
+received agent log
+&{message:"Transition: receivingManifest -> receivingManifest\n" computation_id:"1" level:"DEBUG" timestamp:{seconds:1724066799 nanos:94341079}}
+received agent event
+&{event_type:"receivingAlgorithm" timestamp:{seconds:1724066799 nanos:94599012} computation_id:"1" originator:"agent" status:"in-progress"}
+received agent log
+&{message:"agent service gRPC server listening at :7002 without TLS" computation_id:"1" level:"INFO" timestamp:{seconds:1724066799 nanos:94831037}}
+
+Export the agent gRPC URL
+export AGENT_GRPC_URL=localhost:6014
+
+Upload the algorithm
+./build/cocos-cli algo ./iris ./private.pem
+
+The logs will be similar to this:
+2024/08/19 14:29:58 Uploading algorithm binary: ./iris
+Uploading algorithm... 100% [===============================================>]
+2024/08/19 14:29:58 Successfully uploaded algorithm
+
+Upload the dataset
+./build/cocos-cli data ./Iris.csv ./private.pem
+
+2024/08/19 14:30:55 Uploading dataset CSV: ./Iris.csv
+Uploading data... 100% [====================================================>]
+2024/08/19 14:30:55 Successfully uploaded dataset
+
+Finally, download the results
+./build/cocos-cli result ./private.pem
+
+The logs will be similar to this:
+2024/08/19 14:31:46 Retrieving computation result file
+2024/08/19 14:31:46 Computation result retrieved and saved successfully!
+
+Unzip the results
+unzip result.zip -d results
+
+Build the iris example from the AI repository
+cd ../ai/burn-algorithms/
+
+If you haven't already, create the artifacts folder
+mkdir -p artifacts/iris
+
+Copy the results to the artifacts folder
+cp -r ../../cocos/results/* artifacts/iris/
+
+Build the iris-inference example
+cargo build --release --bin iris-inference
+
+Test the iris-inference example
+./target/release/iris-inference '{"sepal_length": 7.0, "sepal_width": 3.2, "petal_length": 4.7, "petal_width": 1.4}'
+
+The output will be similar to this:
+Iris-versicolor
+
+Terminal recording session
+ +For real-world examples to test with cocos, see our AI repository.
+Python is a high-level, interpreted programming language. Python scripts can be run in the enclave. Python is known for its simplicity and readability, making it a popular choice for beginners and experienced developers alike.
+This has been covered in the previous section.
+For Python algorithms, with datasets:
+NOTE: Make sure you have terminated the previous computation before starting a new one.
+Start the computation server:
+go run ./test/computations/main.go ./test/manual/algo/lin_reg.py public.pem false ./test/manual/data/iris.csv
+
+Start the manager
+sudo \
+MANAGER_QEMU_SMP_MAXCPUS=4 \
+MANAGER_GRPC_URL=localhost:7001 \
+MANAGER_LOG_LEVEL=debug \
+MANAGER_QEMU_ENABLE_SEV_SNP=false \
+MANAGER_QEMU_OVMF_CODE_FILE=/usr/share/edk2/x64/OVMF_CODE.fd \
+MANAGER_QEMU_OVMF_VARS_FILE=/usr/share/edk2/x64/OVMF_VARS.fd \
+go run main.go
+
+Export the agent gRPC URL from the computation server logs
+export AGENT_GRPC_URL=localhost:6066
+
+Upload the algorithm
+./build/cocos-cli algo ./test/manual/algo/lin_reg.py ./private.pem -a python -r ./test/manual/algo/requirements.txt
+
+We pass the requirements file to the algorithm since it has dependencies.
+Upload the dataset
+./build/cocos-cli data ./test/manual/data/iris.csv ./private.pem
+
+Watch the agent logs until the computation is complete. The computation will take a while to complete since it will download the dependencies and run the algorithm.
+&{event_type:"algorithm-run" timestamp:{seconds:1723411516 nanos:935138750} computation_id:"1" originator:"agent" status:"error"}
+received agent event
+&{event_type:"resultsReady" timestamp:{seconds:1723411517 nanos:882446542} computation_id:"1" originator:"agent" status:"in-progress"}
+received agent log
+&{message:"Transition: resultsReady -> resultsReady\n" computation_id:"1" level:"DEBUG" timestamp:{seconds:1723411517 nanos:882432675}}
+
+Finally, download the results
+./build/cocos-cli result ./private.pem
+
+Unzip the results
+unzip result.zip -d results
+
+To read the results, make sure you have installed the required dependencies from the requirements file. This should be done inside a virtual environment.
+python3 -m venv venv
+source venv/bin/activate
+pip install -r test/manual/algo/requirements.txt
+
+python3 test/manual/algo/lin_reg.py predict results/model.bin test/manual/data/
+
+The output will be similar to this:
+Precision, Recall, Confusion matrix, in training
+
+ precision recall f1-score support
+
+ Iris-setosa 1.000 1.000 1.000 21
+Iris-versicolor 0.923 0.889 0.906 27
+ Iris-virginica 0.893 0.926 0.909 27
+
+ accuracy 0.933 75
+ macro avg 0.939 0.938 0.938 75
+ weighted avg 0.934 0.933 0.933 75
+
+[[21 0 0]
+ [ 0 24 3]
+ [ 0 2 25]]
+Precision, Recall, and Confusion matrix, in testing
+
+ precision recall f1-score support
+
+ Iris-setosa 1.000 1.000 1.000 29
+Iris-versicolor 1.000 1.000 1.000 23
+ Iris-virginica 1.000 1.000 1.000 23
+
+ accuracy 1.000 75
+ macro avg 1.000 1.000 1.000 75
+ weighted avg 1.000 1.000 1.000 75
+
+[[29 0 0]
+ [ 0 23 0]
+ [ 0 0 23]]
+
+Terminal recording session
+ +For real-world examples to test with cocos, see our AI repository.
+Docker is a platform designed to build, share, and run containerized applications. A container packages the application code, runtime, system tools, libraries, and all necessary settings into a single unit. This ensures the container can be reliably transferred between different computing environments and be executed as expected.
+The Docker images that the Agent runs inside the SVM are subject to some restrictions. The image must have a /cocos directory containing the datasets and results directories. The Agent runs the image inside the SVM and mounts the datasets and results directories onto /cocos/datasets and /cocos/results inside the image. The Docker image must also define the command that is run when the container starts. A minimal sketch of an algorithm that honors this layout is shown below.
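+Illustrative Go sketch, an assumption-level example rather than the algorithm used below: it reads whatever datasets are mounted under /cocos/datasets and writes its output under /cocos/results, demonstrating only the directory contract described above.

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

const (
	datasetsDir = "/cocos/datasets" // datasets are mounted here by the Agent
	resultsDir  = "/cocos/results"  // the Agent collects results from here
)

func main() {
	entries, err := os.ReadDir(datasetsDir)
	if err != nil {
		log.Fatalf("read datasets dir: %v", err)
	}

	// A toy "computation": total up the size of every mounted dataset file.
	var total int64
	for _, e := range entries {
		info, err := e.Info()
		if err != nil {
			log.Fatalf("stat %s: %v", e.Name(), err)
		}
		total += info.Size()
	}

	out := fmt.Sprintf("saw %d dataset file(s), %d bytes in total\n", len(entries), total)
	if err := os.WriteFile(filepath.Join(resultsDir, "results.txt"), []byte(out), 0o644); err != nil {
		log.Fatalf("write results: %v", err)
	}
}
```

+The image's Dockerfile would copy such a binary in and set it as the container command, per the requirement above.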
We will use the linear regression example from the cocos repository in this example.
+git clone https://github.com/ultravioletrs/cocos.git
+
+Change directory to the linear regression example.
+cd cocos/test/manual/algo/
+
+Next, run the build command and save the Docker image as a tar file. This example Dockerfile is based on the Python linear regression example using the iris dataset.
docker build -t linreg .
+docker save linreg > linreg.tar
+
+Change the current working directory to the cocos folder.
cd ./cocos
+
+Start the computation server:
+go run ./test/computations/main.go ./test/manual/algo/linreg.tar public.pem false ./test/manual/data/iris.csv
+
+Start the manager
+cd cmd/manager
+
+sudo \
+MANAGER_QEMU_SMP_MAXCPUS=4 \
+MANAGER_QEMU_MEMORY_SIZE=25G \
+MANAGER_GRPC_URL=localhost:7001 \
+MANAGER_LOG_LEVEL=debug \
+MANAGER_QEMU_ENABLE_SEV_SNP=false \
+MANAGER_QEMU_OVMF_CODE_FILE=/usr/share/edk2/x64/OVMF_CODE.fd \
+MANAGER_QEMU_OVMF_VARS_FILE=/usr/share/edk2/x64/OVMF_VARS.fd \
+go run main.go
+
+Export the agent gRPC URL from the computation server logs
+export AGENT_GRPC_URL=localhost:6100
+
+Upload the algorithm
+./build/cocos-cli algo ./test/manual/algo/linreg.tar ./private.pem -a docker
+
+Upload the dataset
+./build/cocos-cli data ./test/manual/data/iris.csv ./private.pem
+
+Once the results are ready, run the following command to retrieve them:
+./build/cocos-cli result ./private.pem
+
+The logs will be similar to this:
+2024/08/19 14:14:31 Retrieving computation result file
+2024/08/19 14:14:31 Computation result retrieved and saved successfully!
+
+Unzip the results
+unzip result.zip -d results
+
+To run inference on the results, use the following command:
+python3 test/manual/algo/lin_reg.py predict results/model.bin test/manual/data/
+
+Terminal recording session
+WebAssembly (Wasm) is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable compilation target for high-level languages like C/C++/Rust, enabling deployment on the web for client and server applications. Wasm modules can be run in the enclave.
+NOTE: Make sure you have terminated the previous computation before starting a new one.
+Download the examples from the AI repository and follow the instructions in the README file to compile one of the examples.
+git clone https://github.com/ultravioletrs/ai
+
+Make sure you have Rust installed. If not, you can install it by following the instructions here.
+cd ai/burn-algorithms/addition-inference
+
+cargo build --release --target wasm32-wasip1 --features cocos
+
+This will generate the wasm module in the ../target/wasm32-wasip1/release folder. Copy the module to the cocos folder.
cp ../target/wasm32-wasip1/release/addition-inference.wasm ../../../cocos
+
+Start the computation server:
+go run ./test/computations/main.go ./addition-inference.wasm public.pem true
+
+Start the manager
+sudo \
+MANAGER_QEMU_SMP_MAXCPUS=4 \
+MANAGER_GRPC_URL=localhost:7001 \
+MANAGER_LOG_LEVEL=debug \
+MANAGER_QEMU_ENABLE_SEV_SNP=false \
+MANAGER_QEMU_OVMF_CODE_FILE=/usr/share/edk2/x64/OVMF_CODE.fd \
+MANAGER_QEMU_OVMF_VARS_FILE=/usr/share/edk2/x64/OVMF_VARS.fd \
+go run main.go
+
+Export the agent gRPC URL from the computation server logs
+export AGENT_GRPC_URL=localhost:6013
+
+Upload the algorithm
+./build/cocos-cli algo ./addition-inference.wasm ./private.pem -a wasm
+
+Since the algorithm is a wasm module, we don't need to upload the requirements file. Also, this is the addition example so we don't need to upload the dataset.
+Finally, download the results
+./build/cocos-cli result ./private.pem
+
+Unzip the results
+unzip result.zip -d results
+
+cat results/results/results.txt
+
+The output will be similar to this:
+"[5.141593, 4.0, 5.0, 8.141593]"
+
+Terminal recording session
+ +For real-world examples to test with cocos, see our AI repository.
+The CocosAI system runs on the host, and its main goal is to enable confidential computation: running workloads inside TEEs so that data and algorithms remain protected while in use.
+These features are implemented by several independent components of the CocosAI system, described below: the Manager, the Agent, EOS (the Enclave Operating System), and the CLI.
++N.B. The CocosAI open-source project does not provide a Computation Management service. That is a cloud component, used to define a Computation (i.e., to define computation metadata such as the participant list, algorithm and data providers, result recipients, etc.). Ultraviolet provides the commercial product Prism, a multi-party computation platform that implements a multi-tenant, scalable Computation Management service, running in the cloud or on premises, and capable of connecting to and controlling a CocosAI system running on the TEE host.
+
Manager is a gRPC client that listens for computation requests over gRPC and sends them to the Agent via vsock. The Manager creates a secure enclave (the VM) and loads the computation into it, where the Agent resides. The connection between the Manager and the Agent is over vsock; through this channel the Agent periodically sends events to the Manager, which forwards them via gRPC.
+For more information on Manager, please refer to Manager docs.
+Agent defines the firmware that goes into the TEE; it is used to control and monitor the computation within the TEE and to enable secure, encrypted communication with the outside world (in order to fetch the data and provide the result of the computation). The Agent contains a gRPC server that exposes useful functions to other gRPC clients, such as the CLI. Communication between the Manager and the Agent is done via vsock: the Agent sends events to the Manager over vsock, and the Manager forwards them via gRPC.
+For more information on Agent, please refer to Agent docs.
+EOS, or Enclave Operating System, is a custom lightweight Linux distribution built on Buildroot Linux. It contains the agent and the other packages required to run workloads in the TEE.
+CoCoS CLI is used to access the agent within the secure enclave. The CLI communicates with the agent over gRPC, with commands such as algo to provide the algorithm to be run, data to provide the data to be used in the computation, and run to start the computation. It also has commands to fetch and validate the attestation report of the enclave.
+For more information on CLI, please refer to CLI docs.