Releases: streamingfast/substreams-sink-sql
v3.0.3
protodefs-v1.0.3
- Added support for selecting the engine (`postgres` or `clickhouse`) in the SinkConfig protobuf definition. Example:

```yaml
schema: "./schema.sql"
wire_protocol_access: true
engine: clickhouse
pgweb_frontend:
  enabled: false
```
v3.0.2
- Fixed default endpoints for networks 'goerli' and 'mumbai'
- Added `--postgraphile` flag to `setup`, which will add a `@skip` comment on the cursor table so Postgraphile doesn't try to serve cursors (it resulted in a name collision with Postgraphile internal names)
- Fixed a bug with the Clickhouse driver where different integer sizes need explicit conversion
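As a sketch, the new flag would be passed to `setup` alongside its existing arguments (the `<dsn>` and schema path are placeholders, and the flag position is illustrative):

```shell
# Adds a @skip smart comment on the cursor table during setup so
# Postgraphile does not try to expose it.
substreams-sink-sql setup --postgraphile <dsn> "path/to/schema.sql"
```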
protodefs-v1.0.2
SPKG with protobuf definition for sf.substreams.sink.sql.v1 "Service":

```proto
syntax = "proto3";

package sf.substreams.sink.sql.v1;

option go_package = "github.com/streamingfast/substreams-sink-sql/pb;pbsql";

import "sf/substreams/options.proto";

message Service {
  // Containing both create table statements and index creation statements.
  string schema = 1 [ (sf.substreams.options).load_from_file = true ];

  optional DBTConfig dbt_config = 2;

  bool wire_protocol_access = 3;

  HasuraFrontend hasura_frontend = 4;
  PostgraphileFrontend postgraphile_frontend = 5;
  PGWebFrontend pgweb_frontend = 6;
}

message DBTConfig {
  bytes files = 1 [ (sf.substreams.options).zip_from_folder = true ];
}

message HasuraFrontend {
  bool enabled = 1;
}

message PostgraphileFrontend {
  bool enabled = 1;
}

message PGWebFrontend {
  bool enabled = 1;
}
```
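As a sketch of how these fields map to a manifest's `sink.config` section (field names taken from the `Service` message above; the schema path and enabled/disabled values are illustrative):

```yaml
sink:
  module: main:db_out
  type: sf.substreams.sink.sql.v1.Service
  config:
    schema: "./schema.sql"
    wire_protocol_access: true
    pgweb_frontend:
      enabled: true
    postgraphile_frontend:
      enabled: false
    hasura_frontend:
      enabled: false
```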
v3.0.1
Fixed
- Fixed an issue where the schema encoded in the SinkConfig part of a manifest would not be encoded correctly, leading to garbled (base64) bytes being sent to the SQL server instead of the schema.
v3.0.0
Highlights
This release brings a major refactoring enabling support for multiple database drivers, not just Postgres anymore. Our first newly supported driver is Clickhouse, which defines itself as "the fastest and most resource efficient open-source database for real-time apps and analytics". In the future, further database drivers could be supported, like MySQL, MSSQL and any other that can speak the SQL protocol.
Now that we support multiple drivers, keeping the name substreams-sink-postgres didn't make sense anymore. As such, we have renamed the project from substreams-sink-postgres to substreams-sink-sql, since it now supports Clickhouse out of the box. The binary and Go modules have been renamed accordingly.
Another major change brought by this release is the usage of Substreams "Deployable Units". What we call a "Deployable Unit" is a Substreams manifest that fully defines a deployment, packaged as a single artifact. This changes how the sink is operated; the SQL schema, output module and "network" identifier are now passed in the "SinkConfig" section of the Substreams manifest instead of being accepted on the command line.
Read the Operators section below to learn how to migrate to this new version.
Operators
Passing the `schema` and the `module_name` to the `run` and `setup` commands is no longer accepted via arguments; they need to be written to the `substreams.yaml` file.
Before:

```shell
substreams-sink-sql setup <dsn> "path/to/schema.sql"
substreams-sink-sql run <dsn> mainnet.eth.streamingfast.io:443 https://github.com/streamingfast/substreams-eth-block-meta/releases/download/v0.5.1/substreams-eth-block-meta-v0.5.1.spkg db_out [<range>]
```
Now:
- Create a deployable unit file, let's call it `substreams.prod.yaml`, with content:

```yaml
specVersion: v0.1.0
package:
  name: "<name>"
  version: v0.0.1

imports:
  sql: https://github.com/streamingfast/substreams-sink-sql/releases/download/protodefs-v1.0.1/substreams-sink-sql-protodefs-v1.0.1.spkg
  main: https://github.com/streamingfast/substreams-eth-block-meta/releases/download/v0.5.1/substreams-eth-block-meta-v0.5.1.spkg

network: mainnet

sink:
  module: main:db_out
  type: sf.substreams.sink.sql.v1.Service
  config:
    schema: "./path/to/schema.sql"
```
In this file, `<name>` is the same name as the one your current `<manifest>` defines, and https://github.com/streamingfast/substreams-eth-block-meta/releases/download/v0.5.1/substreams-eth-block-meta-v0.5.1.spkg is the manifest you currently deploy. The `./path/to/schema.sql` entry should point to your schema file (path resolved relative to the parent directory of `substreams.prod.yaml`).
The `network: mainnet` entry will be used to resolve an endpoint. You can configure each network to have its own endpoint via the environment variable `SUBSTREAMS_ENDPOINTS_CONFIG_<NETWORK>`, or override this mechanism completely by using the `--endpoint` flag. Most used networks have default endpoints.
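For example, the endpoint for a given network can be pinned down like this (the endpoint value is illustrative and `<dsn>` is a placeholder):

```shell
# Resolve the 'mainnet' network to a specific endpoint via environment
# variable; the <NETWORK> suffix is the upper-cased network name.
export SUBSTREAMS_ENDPOINTS_CONFIG_MAINNET=mainnet.eth.streamingfast.io:443
substreams-sink-sql run <dsn> substreams.prod.yaml

# Or bypass network resolution entirely for a single invocation:
substreams-sink-sql run <dsn> substreams.prod.yaml --endpoint mainnet.eth.streamingfast.io:443
```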
- Setup your database:

```shell
substreams-sink-sql setup <dsn> substreams.prod.yaml
```

- Run the sink:

```shell
substreams-sink-sql run <dsn> substreams.prod.yaml [<range>]
```
Similar changes have been applied to other commands as well.
protodefs-v1.0.1
SPKG with protobuf definition for sf.substreams.sink.sql.v1 "Service":

```proto
syntax = "proto3";

package sf.substreams.sink.sql.v1;

option go_package = "github.com/streamingfast/substreams-sink-sql/pb;pbsql";

import "sf/substreams/options.proto";

message Service {
  // Containing both create table statements and index creation statements.
  string schema = 1 [ (sf.substreams.options).load_from_file = true ];

  optional DBTConfig dbt_config = 2;

  bool wire_protocol_access = 3;

  HasuraConfig hasura_frontend = 4;
  PostgraphileConfig postgraphile_frontend = 5;
}

message DBTConfig {
  bytes files = 1 [ (sf.substreams.options).zip_from_folder = true ];
}

message HasuraConfig {}

message PostgraphileConfig {}
```
v2.5.4
Added
- Added average flush duration to sink stats.
- Added a log line when the flush time to the database is > 5s (at `INFO`) and > 30s (at `WARN`).
Fixed
- Fixed `pprof` HTTP routes not being properly registered.
Changed
- Renamed Prometheus metric `substreams_sink_postgres_flushed_entries_count` to `substreams_sink_postgres_flushed_rows_count`; adjust your dashboards if needed. The metric also changed from a `Gauge` to a `Counter`.
v2.5.3
- Refactored internal code to support multiple database drivers.
- Experimental `clickhouse` driver is now supported.
- Added a driver abstraction.

You can connect to Clickhouse by using the following DSN:

- Not encrypted: `clickhouse://<host>:9000/<database>?username=<user>&password=<password>`
- Encrypted: `clickhouse://<host>:9440/<database>?secure=true&skip_verify=true&username=<user>&password=<password>`
If you want to send custom args to the connection, you can pass them as query params.
- Not encrypted: