Commit

Merge remote-tracking branch 'refs/remotes/origin/main' into tudor/more_cleanup
tudor-malene committed Apr 12, 2024
2 parents 8c2867f + 261253b commit af24ad7
Showing 71 changed files with 2,770 additions and 1,525 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/manual-deploy-obscuro-gateway.yml
@@ -137,5 +137,5 @@ jobs:
&& docker run -d -p 80:80 -p 81:81 --name ${{ github.event.inputs.testnet_type }}-OG-${{ GITHUB.RUN_NUMBER }} \
-e OBSCURO_GATEWAY_VERSION="${{ GITHUB.RUN_NUMBER }}-${{ GITHUB.SHA }}" \
${{ vars.DOCKER_BUILD_TAG_GATEWAY }} \
./wallet_extension_linux -host=0.0.0.0 -port=80 -portWS=81 -nodeHost=${{ vars.L2_RPC_URL_VALIDATOR }} \
-host=0.0.0.0 -port=8080 -portWS=81 -nodeHost=${{ vars.L2_RPC_URL_VALIDATOR }} \
-logPath=sys_out -dbType=mariaDB -dbConnectionURL="obscurouser:${{ secrets.OBSCURO_GATEWAY_MARIADB_USER_PWD }}@tcp(obscurogateway-mariadb-${{ github.event.inputs.testnet_type }}.uksouth.cloudapp.azure.com:3306)/ogdb"'
12 changes: 12 additions & 0 deletions .github/workflows/manual-deploy-testnet-l2.yml
@@ -112,6 +112,17 @@ jobs:
inlineScript: |
$(az resource list --tag ${{ vars.AZURE_DEPLOY_GROUP_L2 }}=true --query '[]."id"' -o tsv | xargs -n1 az resource delete --verbose -g Testnet --ids) || true
# Delete old database tables from previous deployment
- name: 'Delete host databases'
uses: azure/CLI@v1
with:
inlineScript: |
databases=$(az postgres flexible-server db list --resource-group Testnet --server-name postgres-ten-${{ github.event.inputs.testnet_type }} --query "[?starts_with(name, 'host_')].[name]" -o tsv)
for db in $databases; do
az postgres flexible-server db delete --database-name "$db" --resource-group Testnet --server-name postgres-ten-${{ github.event.inputs.testnet_type }} --yes
done
- name: 'Upload L1 deployer container logs'
uses: actions/upload-artifact@v3
with:
@@ -249,6 +260,7 @@ jobs:
-max_batch_interval=${{ vars.L2_MAX_BATCH_INTERVAL }} \
-rollup_interval=${{ vars.L2_ROLLUP_INTERVAL }} \
-l1_chain_id=${{ vars.L1_CHAIN_ID }} \
-postgres_db_host=postgres://tenuser:${{ secrets.TEN_POSTGRES_USER_PWD }}@postgres-ten-${{ github.event.inputs.testnet_type }}.postgres.database.azure.com:5432/ \
start'
check-obscuro-is-healthy:
2 changes: 1 addition & 1 deletion README.md
@@ -209,7 +209,7 @@ root
│ │ ├── <a href="./go/ethadapter/erc20contractlib">erc20contractlib</a>: Understand ERC20 transactions.
│ │ └── <a href="./go/ethadapter/mgmtcontractlib">mgmtcontractlib</a>: Understand Ten Management contract transactions.
│ ├── <a href="./go/host">host</a>: The standalone host process.
│ │ ├── <a href="./go/host/db">db</a>: The host's database.
│ │ ├── <a href="go/host/storage/db">db</a>: The host's database.
│ │ ├── <a href="./go/host/hostrunner">hostrunner</a>: The entry point.
│ │ ├── <a href="./go/host/main">main</a>: Main
│ │ ├── <a href="./go/host/node">node</a>: The host implementation.
154 changes: 154 additions & 0 deletions design/host/host_db_requirements.md
@@ -0,0 +1,154 @@
# Moving Host DB to SQL

The current implementation uses the `ethdb.KeyValueStore` which provides fast access but is not sufficient for the
querying capabilities required by Tenscan. We want to move to an SQL implementation similar to what the Enclave uses.

## Current Storage
### Schema Keys
```go
var (
blockHeaderPrefix = []byte("b")
blockNumberHeaderPrefix = []byte("bnh")
batchHeaderPrefix = []byte("ba")
batchHashPrefix = []byte("bh")
batchNumberPrefix = []byte("bn")
batchPrefix = []byte("bp")
batchHashForSeqNoPrefix = []byte("bs")
batchTxHashesPrefix = []byte("bt")
headBatch = []byte("hb")
totalTransactionsKey = []byte("t")
rollupHeaderPrefix = []byte("rh")
rollupHeaderBlockPrefix = []byte("rhb")
tipRollupHash = []byte("tr")
blockHeadedAtTip = []byte("bht")
)
```
Some of the schema keys are dummy keys for single entries that are updated in place, such as totals or tip
data. The rest of the schema keys are used as prefixes, prepended to the `[]byte` representation of the key.

| Data Type | Description | Schema | Key | Value (Encoded) |
|------------------|---------------------------------|--------|------------------------------|--------------------|
| **Batch** | Batch hash to headers | ba | BatchHeader.Hash() | BatchHeader |
| **Batch** | Batch hash to ExtBatch | bp | ExtBatch.Hash() | ExtBatch |
| **Batch** | Batch hash to TX hashes | bt | ExtBatch.Hash() | ExtBatch.TxHashes |
| **Batch** | Batch number to batch hash | bh | BatchHeader.Number | BatchHeader.Hash() |
| **Batch** | Batch seq no to batch hash | bs | BatchHeader.SequencerOrderNo | BatchHeader.Hash() |
| **Batch** | TX hash to batch number | bn | ExtBatch.TxHashes[i] | BatchHeader.Number |
| **Batch** | Head Batch | hb | "hb" | ExtBatch.Hash() |
| **Block** | L1 Block hash to block header | b | Header.Hash() | Header |
| **Block** | L1 Block height to block header | bnh | Header.Number | Header |
| **Block** | Latest Block | bht | "bht" | Header.Hash() |
| **Rollup** | Rollup hash to header | rh | RollupHeader.Hash() | RollupHeader |
| **Rollup** | L1 Block hash to rollup header | rhb | L1Block.Hash() | RollupHeader |
| **Rollup** | Tip rollup header | tr | "tr" | RollupHeader |
| **Transactions** | Total number of transactions | t | "t" | Int |

## Tenscan Functionality Requirements

### Mainnet Features
#### Currently supported
* Return the list of batches in descending order
* View details within the batch (BatchHeader and ExtBatch)
* Return the number of transactions within the batch
* Return the list of transactions in descending order

#### Not currently supported
* Return a list of rollups in descending order
* View details of the rollup (probably needs to be ExtBatch for the user)
* Navigate to the L1 block on etherscan from the rollup
* Return the list of batches within the rollup
* Navigate from the transaction to the batch it was included in
* Navigate from the batch to the rollup that it was included in
* TODO Cross chain messaging - Arbiscan shows L1>L2 and L2>L1

### Testnet-Only Features
#### Currently supported
* Copy the encrypted TX blob to a new page and decrypt there

#### Not currently supported
* From the batch you should be able to optionally decrypt the transactions within the batch
* Navigate into the transaction details from the decrypted transaction
* We want to be able to navigate up the chain from TX to batch to rollup

## SQL Schema

There are some considerations here around the behaviour of Tenscan for testnet vs mainnet. Because we are able to
decrypt the encrypted blob on testnet, we can retrieve the number of transactions that way, but on mainnet this won't
be possible, so we need to store the `TxCount` in the database.

### Rollup
```sql
create table if not exists rollup_host
(
id INTEGER PRIMARY KEY AUTOINCREMENT,
hash binary(16) NOT NULL UNIQUE,
start_seq int NOT NULL,
end_seq int NOT NULL,
time_stamp int NOT NULL,
ext_rollup blob NOT NULL,
compression_block binary(32) NOT NULL
);

create index IDX_ROLLUP_HASH_HOST on rollup_host (hash);
create index IDX_ROLLUP_PROOF_HOST on rollup_host (compression_block);
create index IDX_ROLLUP_SEQ_HOST on rollup_host (start_seq, end_seq);
```

Calculating the `L1BlockHeight` as done in `calculateL1HeightsFromDeltas` would be quite computationally expensive, so
we can just order rollups by `end_seq`.

### Batch
Storing the encoded ext batch so that we can provide rich data to the UI including gas, receipt, cross-chain hash etc.
```sql
create table if not exists batch_host
(
sequence int primary key,
full_hash binary(32) NOT NULL,
hash binary(16) NOT NULL unique,
height int NOT NULL,
ext_batch mediumblob NOT NULL
);

create index IDX_BATCH_HEIGHT_HOST on batch_host (height);

```

### Transactions

We need to store these separately for efficient lookup of the batch by tx hash and vice versa.

Because we can decrypt the encrypted blob on testnet, we can retrieve the number of transactions that way, but on
mainnet this won't be possible, so we need to store the `tx_count` in this table. There is a plan to remove
`ExtBatch.TxHashes` and expose a new Enclave API to retrieve this.

```sql
create table if not exists transactions_host
(
hash binary(32) primary key,
b_sequence int REFERENCES batch_host
);

create table if not exists transaction_count
(
id int NOT NULL primary key,
total int NOT NULL
);

```
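A sketch of the two lookups this layout enables, in both directions of the tx-to-batch navigation. The query text is illustrative (names match the proposed schema), not taken from the implementation:

```go
package main

import "fmt"

// With tx hashes in their own table keyed by batch sequence, navigating
// from a transaction to its batch and back becomes a single query each way.
const (
	// batch containing a given transaction
	batchByTxHash = `SELECT b.sequence, b.full_hash, b.height, b.ext_batch
FROM batch_host b
JOIN transactions_host t ON t.b_sequence = b.sequence
WHERE t.hash = ?`

	// transactions included in a given batch
	txsByBatchSeq = `SELECT hash FROM transactions_host WHERE b_sequence = ?`
)

func main() {
	fmt.Println(batchByTxHash)
	fmt.Println(txsByBatchSeq)
}
```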

## Database Choice

The obvious choice is MariaDB, as this is what the gateway uses, so we would have consistency across the stack. It
would also make deployment simpler, as the scripts are already there. The main benefits of MariaDB:

* Offers performance improvements through the Aria storage engine, which is not available in MySQL
* Strong security focus, with RBAC and data-at-rest encryption
* Supports a large number of concurrent connections

Postgres would be the obvious alternative, but given that it is favoured for advanced data types, complex queries and
geospatial capabilities, it doesn't offer us any benefit over MariaDB for this use case.

## Cross Chain Messages

We want to display L2 > L1 and L1 > L2 transaction data. We will expose an API to retrieve these, and the
implementation will either subscribe to the events API or store the messages in the database. TBC
6 changes: 5 additions & 1 deletion dockerfiles/host.Dockerfile
@@ -36,7 +36,11 @@ FROM alpine:3.18
# Copy over just the binary from the previous build stage into this one.
COPY --from=build-host \
/home/obscuro/go-obscuro/go/host/main /home/obscuro/go-obscuro/go/host/main


# Workaround to fix the Postgres filepath issue
COPY --from=build-host \
/home/obscuro/go-obscuro/go/host/storage/init/postgres /home/obscuro/go-obscuro/go/host/storage/init/postgres

WORKDIR /home/obscuro/go-obscuro/go/host/main

# expose the http and the ws ports to the host
1 change: 1 addition & 0 deletions go.mod
@@ -107,6 +107,7 @@ require (
github.com/kr/pretty v0.3.1 // indirect
github.com/kr/text v0.2.0 // indirect
github.com/leodido/go-urn v1.4.0 // indirect
github.com/lib/pq v1.10.9 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-runewidth v0.0.15 // indirect
github.com/mmcloughlin/addchain v0.4.0 // indirect
2 changes: 2 additions & 0 deletions go.sum
@@ -302,6 +302,8 @@ github.com/leanovate/gopter v0.2.9 h1:fQjYxZaynp97ozCzfOyOuAGOU4aU/z37zf/tOujFk7
github.com/leanovate/gopter v0.2.9/go.mod h1:U2L/78B+KVFIx2VmW6onHJQzXtFb+p5y3y2Sh+Jxxv8=
github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/mattn/go-colorable v0.1.2/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
github.com/mattn/go-colorable v0.1.8/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
4 changes: 2 additions & 2 deletions go/common/batches.go
@@ -12,7 +12,7 @@ import (
// todo (#718) - expand this structure to contain the required fields.
type ExtBatch struct {
Header *BatchHeader
// todo - remove
// todo - remove and replace with enclave API
TxHashes []TxHash // The hashes of the transactions included in the batch.
EncryptedTxBlob EncryptedTransactions
hash atomic.Value
@@ -32,7 +32,7 @@ func (b *ExtBatch) Hash() L2BatchHash {
func (b *ExtBatch) Encoded() ([]byte, error) {
return rlp.EncodeToBytes(b)
}

func (b *ExtBatch) SeqNo() *big.Int { return new(big.Int).Set(b.Header.SequencerOrderNo) }
func DecodeExtBatch(encoded []byte) (*ExtBatch, error) {
var batch ExtBatch
if err := rlp.DecodeBytes(encoded, &batch); err != nil {
5 changes: 2 additions & 3 deletions go/common/host/host.go
@@ -4,17 +4,16 @@ import (
"github.com/ethereum/go-ethereum/core/types"
"github.com/ten-protocol/go-ten/go/common"
"github.com/ten-protocol/go-ten/go/config"
"github.com/ten-protocol/go-ten/go/host/db"
"github.com/ten-protocol/go-ten/go/host/storage"
"github.com/ten-protocol/go-ten/go/responses"
"github.com/ten-protocol/go-ten/lib/gethfork/rpc"
)

// Host is the half of the Obscuro node that lives outside the enclave.
type Host interface {
Config() *config.HostConfig
DB() *db.DB
EnclaveClient() common.Enclave

Storage() storage.Storage
// Start initializes the main loop of the host.
Start() error
// SubmitAndBroadcastTx submits an encrypted transaction to the enclave, and broadcasts it to the other hosts on the network.
31 changes: 31 additions & 0 deletions go/common/query_types.go
@@ -25,11 +25,21 @@ type BatchListingResponse struct {
Total uint64
}

type BatchListingResponseDeprecated struct {
BatchesData []PublicBatchDeprecated
Total uint64
}

type BlockListingResponse struct {
BlocksData []PublicBlock
Total uint64
}

type RollupListingResponse struct {
RollupsData []PublicRollup
Total uint64
}

type PublicTransaction struct {
TransactionHash TxHash
BatchHeight *big.Int
@@ -38,10 +48,31 @@ type PublicTransaction struct {
}

type PublicBatch struct {
SequencerOrderNo *big.Int `json:"sequence"`
Hash []byte `json:"hash"`
FullHash common.Hash `json:"fullHash"`
Height *big.Int `json:"height"`
TxCount *big.Int `json:"txCount"`
Header *BatchHeader `json:"header"`
EncryptedTxBlob EncryptedTransactions `json:"encryptedTxBlob"`
}

// TODO (@will) remove when tenscan UI has been updated
type PublicBatchDeprecated struct {
BatchHeader
TxHashes []TxHash `json:"txHashes"`
}

type PublicRollup struct {
ID *big.Int
Hash []byte
FirstSeq *big.Int
LastSeq *big.Int
Timestamp uint64
Header *RollupHeader
L1Hash []byte
}

type PublicBlock struct {
BlockHeader types.Header `json:"blockHeader"`
RollupHash common.Hash `json:"rollupHash"`
@@ -1,4 +1,4 @@
package database
package storage

import (
"database/sql"
13 changes: 8 additions & 5 deletions go/config/host_config.go
@@ -75,8 +75,8 @@ type HostInputConfig struct {
// UseInMemoryDB sets whether the host should use in-memory or persistent storage
UseInMemoryDB bool

// LevelDBPath path for the levelDB persistence dir (can be empty if a throwaway file in /tmp/ is acceptable, or if using InMemory DB)
LevelDBPath string
// PostgresDBHost db url for connecting to Postgres host database
PostgresDBHost string

// DebugNamespaceEnabled enables the debug namespace handler in the host rpc server
DebugNamespaceEnabled bool
@@ -132,7 +132,7 @@ func (p HostInputConfig) ToHostConfig() *HostConfig {
MetricsEnabled: p.MetricsEnabled,
MetricsHTTPPort: p.MetricsHTTPPort,
UseInMemoryDB: p.UseInMemoryDB,
LevelDBPath: p.LevelDBPath,
PostgresDBHost: p.PostgresDBHost,
DebugNamespaceEnabled: p.DebugNamespaceEnabled,
BatchInterval: p.BatchInterval,
MaxBatchInterval: p.MaxBatchInterval,
@@ -191,8 +191,8 @@ type HostConfig struct {
LogPath string
// Whether the host should use in-memory or persistent storage
UseInMemoryDB bool
// filepath for the levelDB persistence dir (can be empty if a throwaway file in /tmp/ is acceptable, or if using InMemory DB)
LevelDBPath string
// Host address for Postgres DB instance (can be empty if using InMemory DB or if attestation is disabled)
PostgresDBHost string
// filepath for the sqlite DB persistence file (can be empty if a throwaway file in /tmp/ is acceptable or
// if using InMemory DB)
SqliteDBPath string

//////
// NODE NETWORKING
