dcrpg (DB v2) - PostgreSQL backend and Lite mode (#209)
* Start packages dcrpg and dbtypes, and app cmd/rebuilddb2.

move dcrsqlite to db subdir

Update dcrsqlite imports.

Create package dbtypes with more thorough data types than the dcrdataapi types.
Create JSONB type with driver.Valuer and sql.Scanner implemented for PostgreSQL support.
Define tables in dcrpg.
Create dcrpg.Connect() and dcrpg.CreateTables().
Start cmd/rebuilddb2.
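As a rough illustration of the JSONB type described above, a minimal driver.Valuer/sql.Scanner pair might look like the following sketch; the actual dbtypes.JSONB may differ in shape and error handling.

```go
package dbtypes

import (
	"database/sql/driver"
	"encoding/json"
	"fmt"
)

// JSONB holds an arbitrary JSON document bound for a PostgreSQL jsonb column.
type JSONB map[string]interface{}

// Value implements driver.Valuer, marshaling the map for INSERT/UPDATE.
func (j JSONB) Value() (driver.Value, error) {
	return json.Marshal(j)
}

// Scan implements sql.Scanner, unmarshaling the raw bytes from the driver.
func (j *JSONB) Scan(src interface{}) error {
	b, ok := src.([]byte)
	if !ok {
		return fmt.Errorf("unsupported type %T for JSONB", src)
	}
	return json.Unmarshal(b, j)
}
```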

Add --droptables flag, existence check, version in table comments.

Improve CreateTables().

block/tx/vin/vout struct extraction demo in rebuilddb2

keep converging on db table defs

Add pg inserts for vout and tx, store PKs in parents.

Add queries.go, statements.go and stmtinternal.go in dcrpg.

dcrpg: Move table statements test into internal.

Move Connect into own file.
Move some make*InsertStatement functions into internal.

dcrpg: full scan.  block_chain table for tracking prev/next block hashes.

rebuilddb2 does a full chain scan.  But set synchronous_commit to off in postgresql.conf!!!!
Add InsertVouts to add multiple vouts in one db tx.
Add InsertBlockPrevNext and UpdateBlockNext for the block_chain table.
Move processTransactions to extraction.go.
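A hedged sketch of what the block_chain statements could look like; the column names and constraints here are assumptions, and the actual definitions live in dcrpg's internal package.

```go
package internal

// Hypothetical block_chain statements; the real dcrpg definitions may differ.
const (
	CreateBlockPrevNextTable = `CREATE TABLE IF NOT EXISTS block_chain (
		block_db_id INT8 PRIMARY KEY,
		prev_hash TEXT NOT NULL,
		this_hash TEXT UNIQUE NOT NULL,
		next_hash TEXT
	);`

	// Inserted when a block is stored; next_hash is not yet known.
	InsertBlockPrevNext = `INSERT INTO block_chain (block_db_id, prev_hash, this_hash, next_hash)
		VALUES ($1, $2, $3, $4) RETURNING block_db_id;`

	// Run when the following block arrives to fill in next_hash for its parent.
	UpdateBlockNext = `UPDATE block_chain SET next_hash = $2 WHERE block_db_id = $1;`
)
```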

Periodically log tx and vout processing rate.

Make -D a short arg to --droptables.
Fix millisecond logging.

Add PrevOut to VinTxProperty, UPSERT stuff, and update logging.

Start drafting RetrieveSpendingTxs and RetrieveSpendingTx, to get the spending transactions given a funding tx, and a single spending tx given an outpoint, respectively.
Reorganize scan in rebuilddb2.
Add unique_hashes uniqueness constraint for transactions table.
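The spending-transaction lookups being drafted here might take roughly this form, assuming a vins table that records each input's previous outpoint (that table is added in the commits below); names and columns are illustrative only.

```go
package internal

// Illustrative queries for the drafted functions; not the actual statements.
const (
	// All transactions spending any output of the given funding tx.
	SelectSpendingTxsByPrevTx = `SELECT id, tx_hash, tx_index FROM vins
		WHERE prev_tx_hash = $1;`

	// The single transaction spending the given outpoint (funding tx, vout index).
	SelectSpendingTxByPrevOut = `SELECT id, tx_hash, tx_index FROM vins
		WHERE prev_tx_hash = $1 AND prev_tx_index = $2;`
)
```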

rebuilddb2/dbtypes/dcrpg: Working retrieve-spending, indexing, vin table, etc.

Add vin table and InsertVin[s] functions in dcrpg.
Fix (*VinTxPropertyARRAY).Scan that had to work with []interface{} and map[string]interface{} right after json.Unmarshal.
Add Index/Deindex functions in dcrpg.
Add RetrieveSpending/Funding functions and pg statements in dcrpg.
Make pg insert statements that optionally check for conflicts instead of erroring, and optionally upsert.
Add Tree to VinTxProperty, and tree to pg transactions table.
Add tx_hash and tx_index to vin table (and VinTxProperty).
Add IsStakeTx to txhelpers.
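The optional conflict handling mentioned a few lines up could be expressed as three statement variants along these lines; the table, columns, and conflict target here are assumptions for illustration.

```go
package internal

const (
	// Plain insert: a duplicate row violates the unique index and errors.
	InsertVinRow = `INSERT INTO vins (tx_hash, tx_index, prev_tx_hash, prev_tx_index)
		VALUES ($1, $2, $3, $4) RETURNING id;`

	// Conflict-checked insert: duplicates are skipped instead of erroring.
	InsertVinRowChecked = `INSERT INTO vins (tx_hash, tx_index, prev_tx_hash, prev_tx_index)
		VALUES ($1, $2, $3, $4)
		ON CONFLICT (tx_hash, tx_index) DO NOTHING RETURNING id;`

	// Upsert: a duplicate is updated in place and the row id is still returned.
	UpsertVinRow = `INSERT INTO vins (tx_hash, tx_index, prev_tx_hash, prev_tx_index)
		VALUES ($1, $2, $3, $4)
		ON CONFLICT (tx_hash, tx_index)
		DO UPDATE SET prev_tx_hash = $3, prev_tx_index = $4 RETURNING id;`
)
```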

support UNIX domain socket connection to pg

Change vout structure, indexes for vout table.

dcrpg: vout_t insert into transactions table.

Insert vout array using makeARRAYOfVouts.
create and check existence of type statements, called in rebuilddb2.
transaction in blocks selection queries.
vout value retrieval queries.
Configure speedReport function to run after db inserts or on bail out using sync.Once.
Be sure startHeight is 0 on empty tables, not 1.

Add ResumeInitSync config option for disabling conflict checks regardless of db height.

Also play with concurrent stake and regular transaction/vin/vout db insertion.

Fix Vout values queries, make ResumeInitSync drop/create indexes.

Old (wrong) RetrieveVoutValues is a mystery, but the new one is good.
voutindex+1 is the index in SQL (PostgreSQL arrays are 1-based).
Remove ON CONFLICT check from insertVoutRowChecked since there does not seem to be a useful UNIQUE INDEX.
typos

Add UNIX domain socket example to sample-rebuilddb2.conf

Add README.md for dcrpg.

Add dbtypes.MsgBlockToDBBlock into new conversion.go to reuse this code.

Bump version to 0.9.0.

fix password psql login

* ChainDB wrapping sql.DB, dcrpg package.

Collect/Store MsgBlock too, make ChainDB wrap the sql.DB.  INCOMPLETE!

Add several functions to ChainDB for getting data.

TODO: Cache DB PKs in ChainDB somewhere/somehow when we get them while getting the actual data.
Use the new ChainDB functions in rebuilddb2.
Also store Vouts in dbtypes.Tx.Vouts inside storeTxns instead of copying over later.  TODO: remove dbTxVouts from ExtractBlockTransactions?
Remove extra newlines in logging.
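A minimal sketch of the wrapper's likely shape, with illustrative field names; the real ChainDB certainly holds more (a cached best height and cached PKs are mentioned in later commits).

```go
package dcrpg

import (
	"database/sql"

	"github.com/decred/dcrd/chaincfg"
)

// ChainDB wraps the PostgreSQL connection and the active chain parameters.
// Field names here are assumptions, not the actual dcrpg definition.
type ChainDB struct {
	db          *sql.DB
	chainParams *chaincfg.Params
	bestBlock   int64 // cached best block height, updated as blocks are stored
}
```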

Add dcrpg backend to dcrdata, change mainCore to return error, not int.

Add PostgreSQL tuning reference.

Move testing stuff from rebuilddb2 to a _test file.

Hack together a simultaneous sync of PG and sqlite DBs.

Make async goroutines wrapping the regular sync functions, but sending the results (height and error) on a channel.
Add a height return to dcrsqlite's sync function.
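The async wrappers described here might follow a pattern like this runnable sketch; the syncResult type and function names are made up for illustration.

```go
package main

import (
	"fmt"
	"time"
)

// syncResult is a hypothetical carrier for the height and error produced by a
// blocking sync function.
type syncResult struct {
	height int64
	err    error
}

// syncAsync runs the given blocking sync function in a goroutine and reports
// its outcome on the supplied channel.
func syncAsync(res chan<- syncResult, sync func() (int64, error)) {
	go func() {
		height, err := sync()
		res <- syncResult{height: height, err: err}
	}()
}

func main() {
	res := make(chan syncResult, 2)
	// Stand-ins for the PG and SQLite sync functions.
	syncAsync(res, func() (int64, error) { time.Sleep(time.Second); return 100000, nil })
	syncAsync(res, func() (int64, error) { return 100000, nil })
	for i := 0; i < 2; i++ {
		r := <-res
		fmt.Println("sync done at height", r.height, "err:", r.err)
	}
}
```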

Init PSQL logger, rename DSQL->SQLT.

More logging in DB resync loop in main.
Fix two incorrect returned heights from (*ChainDB).SyncChainDB.

Return tx index in block from RetrieveTxByHash, and tx indexes from RetrieveTxsByBlockHash.

The returned tx slice is not ordered by the tx index in the block, so you need this information.
Build tag pgblockchain_test.go so it doesn't run automatically.  It runs against a synced mainnet db, so it is not a fast test from scratch.

* Remove lots of pointless RPC calls.

Add TicketTxnsInBlock.

Remove leftover glide files.

* dcrd deps update for db2

Update original rebuilddb for height output of SyncDBWithPoolValue.

* add postgresql config to sample dcrdata.conf

* Accelerate vin/vout dbtx, use PG array access properly, and retrieve vouts only from the vouts table, not transactions.

Use prepared statements now that sprintf injection is no longer needed for array access, accelerating vin/vout insertion by ~1.5x.
Add batch Tx insertion, when combined with above gives total of ~2x speed up.
RetrieveVoutValue, etc. are now FROM vouts instead of trying to use the crazy JSONB or ROW types that were under testing.  Update VoutValue and VoutValues accordingly.
Stop storing the actual vin/vout data in the transactions table, just the PKs for them in their own tables.
Use tx tree for transaction queries, and include it in the index to make it unique.
Fix dcrdata panic on non-existing cli flag.
UInt64Array in new arrays.go is based on Int64Array in lib/pq's array.go.
Fix tests in pgblockchain_test.go (run with "go test -tags mainnettest -v" after syncing with mainnet).
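For example, an array-of-PKs column could be read back with the new type acting as the sql.Scanner destination, as in this sketch; the column name and import path are assumptions.

```go
package dcrpg

import (
	"database/sql"

	"github.com/dcrdata/dcrdata/db/dbtypes"
)

// retrieveTxVoutDbIDs is illustrative only: it scans a BIGINT[] column of vout
// row ids into a dbtypes.UInt64Array, which implements sql.Scanner in the same
// manner as lib/pq's Int64Array.
func retrieveTxVoutDbIDs(db *sql.DB, txHash string) (dbtypes.UInt64Array, error) {
	var ids dbtypes.UInt64Array
	err := db.QueryRow(`SELECT vout_db_ids FROM transactions WHERE tx_hash = $1;`,
		txHash).Scan(&ids)
	return ids, err
}
```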

* Add spent link to Transactions page, calling SpendingTransaction.

Give dcrdata the start of a secondary data source.
Call SpendingTransaction from txPage to get the information about spending transactions for each outpoint.
Retrieve vin and vout tx indexes when looking up the spent tx.

Ignore bogus testnet2 organization address.

* Add "Lite" mode to use only sqlite.

Rename PGHostPort -> PGHost to be less confusing when using UNIX sockets.
Update sample-dcrdata.conf with "lite" flag and renamed pghost flag.
Add a self nil pointer check on the ChainDB Store and SyncChainDBAsync methods to prevent panics in lite mode.  In lite mode, SyncChainDBAsync returns -1 height and an appropriate message.
explorer has a liteMode bool field, set in the constructor by checking if explorerSource is a nil interface or has a nil pointer value.
The first page with different output depending on lite mode is txPage, for the spending tx links.
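Because a non-nil interface can wrap a typed nil *ChainDB pointer, the lite-mode check presumably needs a reflection test along these lines (the actual constructor may do it differently):

```go
package explorer

import "reflect"

// isNilInterfaceOrNilPointer reports whether the given data source is a nil
// interface value or a non-nil interface wrapping a typed nil pointer.
func isNilInterfaceOrNilPointer(i interface{}) bool {
	if i == nil {
		return true
	}
	v := reflect.ValueOf(i)
	return v.Kind() == reflect.Ptr && v.IsNil()
}
```

The constructor would then set liteMode whenever this returns true for the explorerSource it receives.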

* dcrpg: add addresses table; add tree columns to vins.

- Add AddressRow struct, with address, funding tx details, vout PK, value, spending tx details, and vin PK (see the sketch after this list).
- Add create and drop index statements and functions.
- Add address statements (addrstmts.go) and query functions.
- Block processing now builds a slice of AddressRow structs:
1. Use InsertVouts to create an AddressRow slice with just the funding tx details set.
2. Store it.
3. Query pg for funding details (prevouts) for each of the new Vins.
4. Lookup the address rows for these prevouts.
5. Set the spending tx details (txns and vins processed in current block).

- Add tree to output of SpendingTransaction.
- Add tx_tree and prev_tx_tree columns to vins table.
- Organize vinoutstmts.go content.
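The AddressRow mentioned in the first bullet might be shaped roughly like this; field names and types are illustrative, not the actual dbtypes definition.

```go
package dbtypes

// AddressRow ties an address to the output that funded it and, if spent, the
// input that spent it. All field names here are assumptions.
type AddressRow struct {
	Address            string
	FundingTxDbID      uint64 // transactions table PK of the funding tx
	FundingTxHash      string
	FundingTxVoutIndex uint32
	VoutDbID           uint64 // vouts table PK
	Value              uint64 // atoms funded to Address
	SpendingTxDbID     uint64 // transactions table PK of the spending tx, if spent
	SpendingTxHash     string
	SpendingTxVinIndex uint32
	VinDbID            uint64 // vins table PK
}
```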

Add TxTree field to VinTxProperty, skip lookup of coinbase in addresses table.

Remove stray comma in CreateVinTable.

Add a flag to not update spending info in the addresses table (no update by default).

Still hard coded to update when running normally.
Remove UNIQUE from some indexes that cannot be so, for now.

Add the forgotten funding tx vout index to the addresses table.

Combine address and vins table queries for efficiency.

close statement and rollback on early return

* Add README.md for rebuilddb2

* Update README.md for db2 and v0.9.0

* Docs, cleanup, delint, compress/align some structs.

* Give http a few seconds to bind, waiting for an error.

Previously we just fired it off and didn't automatically quit dcrdata if the web server failed to bind.

* dcrpg - Address info. Add fields to DB transactions table.

- Add to explorerDataSourceAlt: AddressHistory and FillAddressTransactions to get a complete explorer.AddressInfo from a slice of *dbtypes.AddressRow.
- Update addressPage to use these functions when not in lite mode (see the sketch after this list).
- Add BlockTime, Time, TxType, Size, Spent, Sent, and Fees to dbtypes.Tx and to the transactions table in dcrpg.
- Add cached height to ChainDB, updated when block stored.
- Add AddressHistory to ChainDB to query the DB for a []*dbtypes.AddressRow for a given address.
- Add FillAddressTransactions to fill out the fields of a passed *explorer.AddressTx by querying the transactions table for the needed info on each transaction.
- Add queries to retrieve full transaction by hash.
- Add query to retrieve all information for an address.
- Compute spent, sent, and fees for each dbtypes.Tx in processTransactions.
- Add ReduceAddressHistory to explorertypes to create an initialized but incomplete explorer.AddressInfo from a []*dbtypes.AddressRow.
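A hedged outline of how these pieces could fit together for the full-mode address page; every method signature, field, and import path below is an assumption based on the descriptions above, not the actual dcrdata API.

```go
package main

import (
	"github.com/dcrdata/dcrdata/db/dcrpg"
	"github.com/dcrdata/dcrdata/explorer"
)

// addressPageData sketches the non-lite address page path: query the address
// rows, reduce them to an AddressInfo, then fill in per-transaction details.
func addressPageData(pgb *dcrpg.ChainDB, address string) (*explorer.AddressInfo, error) {
	rows, err := pgb.AddressHistory(address) // []*dbtypes.AddressRow
	if err != nil {
		return nil, err
	}
	addrInfo := explorer.ReduceAddressHistory(rows) // initialized but incomplete
	for _, addrTx := range addrInfo.Transactions {
		// Pull size, time, fees, etc. from the transactions table.
		if err := pgb.FillAddressTransactions(addrTx); err != nil {
			return nil, err
		}
	}
	return addrInfo, nil
}
```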

Add txhelpers.VoteVersion to get a vote version from a pkScript []byte.

Set Confirmations and TotalUnconfirmed in FillAddressTransactions.

Document ReduceAddressHistory.

Add is_valid to blocks, block_height to transactions.

BlockHeight also added to dbtypes.Tx.
Fix Confirmations for address page, using BlockHeight instead of index.
Reorder transactions table columns.
Add spending tx info in RetrieveAddressTxns (fix), complete ReduceAddressHistory for spending.
Fix retrieve tx functions using dbtypes.UInt64Array.

* Add LIMIT and OFFSET support for address query.

Support for setting the limit and offset (count and start) from the URL is added using the "n" and "start" URL query parameters on the /address/{address} page.
Simplify (*wiredDB).GetExplorerAddress.
Unexport internal explorer param AddressRows -> maxAddressRows, and min/maxExplorerRows.
Add defaultAddressRows const.
Docs.
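The underlying statement presumably takes a parameterized LIMIT/OFFSET form along these lines (table and column names assumed; a later commit reworks this statement to use a subquery so the address index is used):

```go
package internal

// Illustrative only; not the actual SelectAddressLimitNByAddress statement.
const SelectAddressLimitNByAddress = `SELECT * FROM addresses
	WHERE address = $1
	ORDER BY id DESC
	LIMIT $2 OFFSET $3;`
```

A request such as `/address/{address}?n=20&start=40` would then map `n` to the LIMIT and `start` to the OFFSET.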

* a little extra space for the address on firefox

* Change --resumesync to --reindex in rebuilddb2.

* When adding block, set validity of previous block based on votes.

Add UpdateLastBlock query to set is_valid when needed.
Add Valid to explorer.BlockBasic, but it is currently always set to true.

* Smart sync mode with dcrdata.

Depending on how far the database is from the node's best block, the indexes may be dropped prior to sync, and the address table updates may be skipped in favor of a full rebuild after block/tx sync.

* Query and cache for total address row count.

The address page uses a LIMIT query behind the scenes, so for the page to report the total number of rows, a second query is used to count them.  This is only necessary occasionally, so this adds the addressCounter type to cache these counts.

New blocks invalidate (clear) the address counts and the queries must be executed again for fresh values.

explorer.AddressHistory now also returns the total number of funding/receiving transactions (total rows from the PSQL table).
Add KnownFundingTxns to explorer.AddressInfo and store the count from AddressHistory in it.

The dcrsqlite db (wiredDB) cannot get this count, so it is always 0 in the address page data structure when in lite mode.

Add NumFundingTxns and NumSpendingTxns to the explorer.AddressInfo type and set these values in both sqlite and pg dbs.
Remove AddressRow field.

* Handle starting from genesis on dcrdata

* Switch over to slow and safe sync after indexes are built. (bug fix)

* Update README with rebuilddb2 info and PostgreSQL notes.

* typo in address page template

* Remove space before single digit day number.

* Total address balance info with dcrpg.

New type explorer.AddressBalance.
Use AddressBalance in addressCounter map.
AddressHistory returns AddressBalance instead of just a tx count.
Add queries to get address row count and value sums for spent and unspent outputs.
Add Limit and Offset to explorer.AddressInfo since this type is used with limit/offset queries.
Show AddressBalance data on address page above table with limit.
Set ChainDB.bestBlock in constructor.
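A rough sketch of the two types described in the last few commits, with illustrative field names; the real explorer definitions may differ.

```go
package explorer

import "sync"

// AddressBalance summarizes an address: row count plus spent/unspent totals.
type AddressBalance struct {
	Address      string
	NumSpent     int64
	NumUnspent   int64
	TotalSpent   int64 // atoms
	TotalUnspent int64 // atoms
}

// addressCounter caches an AddressBalance per address so the counting queries
// only run when needed; a new block clears the cache for fresh values.
type addressCounter struct {
	sync.RWMutex
	validHeight int64
	balance     map[string]AddressBalance
}

func (c *addressCounter) clear(height int64) {
	c.Lock()
	defer c.Unlock()
	c.validHeight = height
	c.balance = make(map[string]AddressBalance)
}
```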

* bump to 1.0.0-pre

* Faster SelectAddressLimitNByAddress statement using a subquery that allows use of the address index.

* Fix nil pointer deref on address page for large start=X query.

* stakedb windows: do not wipe db if already opened.

* Fix the other nil pointer deref.

* Use a different address query for the dev subsidy address, since it is extremely common and a different query is faster.
chappjc authored Nov 3, 2017
1 parent c9f6d1d commit e0c6ab4
Showing 54 changed files with 4,806 additions and 244 deletions.
14 changes: 13 additions & 1 deletion Gopkg.lock


102 changes: 78 additions & 24 deletions README.md
@@ -13,36 +13,57 @@ The dcrdata repository is a collection of golang packages and apps for [Decred](
../dcrdata The dcrdata daemon.
├── blockdata Package blockdata.
├── cmd
│   ├── rebuilddb rebuilddb utility.
│   ├── rebuilddb rebuilddb utility, for SQLite backend.
│   ├── rebuilddb2 rebuilddb2 utility, for PostgreSQL backend.
│   └── scanblocks scanblocks utility.
├── dcrdataapi Package dcrdataapi for golang API clients.
├── dcrsqlite Package dcrsqlite providing SQLite backend.
├── public Public resources for web UI (css, js, etc.).
├── db
│   ├── dbtypes Package dbtypes with common data types.
│   ├── dcrpg Package dcrpg providing PostgreSQL backend.
│   └── dcrsqlite Package dcrsqlite providing SQLite backend.
├── public Public resources for block explorer (css, js, etc.).
├── explorer Package explorer, powering the block explorer.
├── mempool Package mempool.
├── rpcutils Package rpcutils.
├── semver Package semver.
├── stakedb Package stakedb, for tracking tickets.
├── txhelpers Package txhelpers.
└── views HTML templates for web UI.
└── views HTML templates for block explorer.
```

## dcrdata daemon

The root of the repository is the `main` package for the dcrdata app, which has
several components including:

1. Block explorer (web interface).
1. Blockchain monitoring and data collection.
1. Mempool monitoring and reporting.
1. Data storage in durable database (sqlite presently).
1. RESTful JSON API over HTTP(S).
1. Basic web interface.

### Block Explorer

After dcrdata syncs with the blockchain server via RPC, by default it will begin
listening for HTTP connections on `http://127.0.0.1:7777/`. This means it starts
a web server listening on IPv4 localhost, port 7777. Both the interface and port
are configurable. The block explorer and the JSON API are both provided by the
server on this port. See [JSON REST API](#json-rest-api) for details.

Note that while dcrdata can be started with HTTPS support, it is recommended to
employ a reverse proxy such as nginx. See sample-nginx.conf for an example nginx
configuration.

A new database backend using PostgreSQL was introduced in v0.9.0 that provides
expanded functionality. However, initial population of the database takes
additional time and tens of gigabytes of disk storage space. To disable the
PostgreSQL backend (and the expanded functionality), dcrdata may be started with
the `--lite` (`-l` for short) command line flag.

### JSON REST API

The API serves JSON data over HTTP(S). After dcrdata syncs with the blockchain
server, by default it will begin listening on `http://0.0.0.0:7777/`. This
means it starts a web server listening on all network interfaces on port 7777.
**All API endpoints are currently prefixed with `/api`** (e.g.
The API serves JSON data over HTTP(S). **All
API endpoints are currently prefixed with `/api`** (e.g.
`http://localhost:7777/api/stake`), but this may be configurable in the future.

#### Endpoint List
Expand Down Expand Up @@ -142,11 +163,6 @@ All JSON endpoints accept the URL query `indent=[true|false]`. For example,
for indentation may be specified with the `indentjson` string configuration
option.

### Web Interface

In addition to the API that is accessible via paths beginning with `/api`, an
HTML interface is served on the root path (`/`).

## Important Note About Mempool

Although there is mempool data collection and serving, it is **very important**
@@ -164,6 +180,13 @@ rebuilddb is a CLI app that performs a full blockchain scan that fills past
block data into a SQLite database. This functionality is included in the startup
of the dcrdata daemon, but may be called alone with rebuilddb.

### rebuilddb2

`rebuilddb2` is a CLI app used for maintenance of dcrdata's `dcrpg` database
(a.k.a. DB v2) that uses PostgreSQL to store a nearly complete record of the
Decred blockchain data. See the [README.md](./cmd/rebuilddb2/README.md) for
`rebuilddb2` for important usage information.

### scanblocks

scanblocks is a CLI app to scan the blockchain and save data into a JSON file.
@@ -176,25 +199,31 @@ comma-separated value (CSV) file.
`package dcrdataapi` defines the data types, with json tags, used by the JSON
API. This facilitates authoring of robust golang clients of the API.

`package dbtypes` defines the data types used by the DB backends to model the
block, transaction, and related blockchain data structures. Functions for
converting from standard Decred data types (e.g. `wire.MsgBlock`) are also
provided.

`package rpcutils` includes helper functions for interacting with a
`dcrrpcclient.Client`.
`rpcclient.Client`.

`package stakedb` defines the `StakeDatabase` and `ChainMonitor` types for
efficiently tracking live tickets, with the primary purpose of computing ticket
pool value quickly. It uses the `database.DB` type from
`github.com/decred/dcrd/database` with an ffldb storage backend from
`github.com/decred/dcrd/database/ffldb`. It also makes use of the `stake.Node`
type from `github.com/decred/dcrd/blockchain/stake`. The `ChainMonitor` type
handles connecting new blocks and chain reorganiation in response to notifications
handles connecting new blocks and chain reorganization in response to notifications
from dcrd.

`package txhelpers` includes helper functions for working with the common types
`dcrutil.Tx`, `dcrutil.Block`, `chainhash.Hash`, and others.

## Internal-use packages

Packages `blockdata` and `dcrsqlite` are currently designed only for internal use
by other dcrdata packages, but they may be of general value in the future.
Packages `blockdata` and `dcrsqlite` are currently designed only for internal
use by other dcrdata packages, but they may be of general value in the
future.

`blockdata` defines:

@@ -206,6 +235,14 @@ by other dcrdata packages, but they may be of general value in the future.
* The `BlockDataSaver` interface required by `chainMonitor` for storage of
collected data.

`dcrpg` defines:

* The `ChainDB` type, which is the primary exported type from `dcrpg`, providing
an interface for a PostgreSQL database.
* A large set of lower-level functions to perform a range of queries given a
`*sql.DB` instance and various parameters.
* The internal package contains the raw SQL statements.

`dcrsqlite` defines:

* A `sql.DB` wrapper type (`DB`) with the necessary SQLite queries for
@@ -229,7 +266,7 @@ See the GitHub issue tracker and the [project milestones](https://github.com/dcr
## Requirements

* [Go](http://golang.org) 1.8.3 or newer.
* Running `dcrd` (>=0.6.1) synchronized to the current best block on the network.
* Running `dcrd` (>=1.1.0) synchronized to the current best block on the network.

## Installation

@@ -278,19 +315,35 @@ First, update the repository (assuming you have `master` checked out):
dep ensure
go build

Look carefully for errors with `git pull`, and reset locally modified files if necessary.
Look carefully for errors with `git pull`, and reset locally modified files
if necessary.

## Getting Started

Create configuration file.
### Create configuration file

Begin with the sample configuration file:

```bash
cp ./sample-dcrdata.conf ./dcrdata.conf
cp sample-dcrdata.conf dcrdata.conf
```

Then edit dcrdata.conf with your dcrd RPC settings.

Finally, launch the daemon and allow the databases to sync. This takes about an hour on the first time. On subsequent launches, only new blocks need to be scanned.
### Indexing the Blockchain

If dcrdata has not previously been run with the PostgreSQL database backend, it is necessary to perform a bulk import of blockchain data and generate table indexes.

- Create the dcrdata user and database in PostgreSQL (tables will be created automatically).
- Set your PostgreSQL credentials and host in both `./cmd/rebuilddb2/rebuilddb2.conf` and `./dcrdata.conf`.
- Run `rebuilddb2 -u` to bulk import and index.
- In case of errors, or schema changes, the tables may be dropped with `rebuilddb2 -D`.

### Starting dcrdata

Finally, launch the dcrdata daemon and allow the databases to sync new blocks.
The SQLite database sync takes about an hour the first time. On subsequent
launches, only new blocks are scanned.

```bash
./dcrdata
Expand All @@ -306,7 +359,8 @@ Yes, please! See the CONTRIBUTING.md file for details, but here's the gist of it
1. Commit and push to your repo.
1. Create a [pull request](https://github.com/dcrdata/dcrdata/compare).

Note that all dcrdata.org community and team members are expected to adhere to the code of conduct, described in the CODE_OF_CONDUCT file.
Note that all dcrdata.org community and team members are expected to adhere to
the code of conduct, described in the CODE_OF_CONDUCT file.

## License

2 changes: 1 addition & 1 deletion apimiddleware.go
@@ -210,7 +210,7 @@ func TransactionIOIndexCtx(next http.Handler) http.Handler {
}

// AddressPathCtx returns a http.HandlerFunc that embeds the value at the url part
// {address} into the request context
// {address} into the request context.
func AddressPathCtx(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
address := chi.URLParam(r, "address")
34 changes: 18 additions & 16 deletions blockdata/blockdata.go
@@ -16,6 +16,7 @@ import (
"github.com/decred/dcrd/dcrjson"
"github.com/decred/dcrd/dcrutil"
"github.com/decred/dcrd/rpcclient"
"github.com/decred/dcrd/wire"
)

// BlockData contains all the data collected by a Collector and stored
@@ -76,7 +77,7 @@ func (b *BlockData) ToBlockSummary() apitypes.BlockDataBasic {
}
func (b *BlockData) ToBlockExplorerSummary() apitypes.BlockExplorerBasic {
t := time.Unix(b.Header.Time, 0)
ftime := t.Format("1/_2/06 15:04:05")
ftime := t.Format("1/2/06 15:04:05")
extra := b.ExtraInfo
extra.FormattedTime = ftime
return apitypes.BlockExplorerBasic{
@@ -113,7 +114,7 @@ func NewCollector(dcrdChainSvr *rpcclient.Client, params *chaincfg.Params,
// CollectAPITypes uses CollectBlockInfo to collect block data, then organizes
// it into the BlockDataBasic and StakeInfoExtended and dcrdataapi types.
func (t *Collector) CollectAPITypes(hash *chainhash.Hash) (*apitypes.BlockDataBasic, *apitypes.StakeInfoExtended) {
blockDataBasic, feeInfoBlock, _, _, err := t.CollectBlockInfo(hash)
blockDataBasic, feeInfoBlock, _, _, _, err := t.CollectBlockInfo(hash)
if err != nil {
return nil, nil
}
@@ -136,10 +137,11 @@ func (t *Collector) CollectAPITypes(hash *chainhash.Hash) (*apitypes.BlockDataBa
// the block data required by Collect() that is specific to the block with the
// given hash.
func (t *Collector) CollectBlockInfo(hash *chainhash.Hash) (*apitypes.BlockDataBasic,
*dcrjson.FeeInfoBlock, *dcrjson.GetBlockHeaderVerboseResult, *apitypes.BlockExplorerExtraInfo, error) {
*dcrjson.FeeInfoBlock, *dcrjson.GetBlockHeaderVerboseResult,
*apitypes.BlockExplorerExtraInfo, *wire.MsgBlock, error) {
msgBlock, err := t.dcrdChainSvr.GetBlock(hash)
if err != nil {
return nil, nil, nil, nil, err
return nil, nil, nil, nil, nil, err
}
height := msgBlock.Header.Height
block := dcrutil.NewBlock(msgBlock)
@@ -178,7 +180,7 @@ func (t *Collector) CollectBlockInfo(hash *chainhash.Hash) (*apitypes.BlockDataB

blockHeaderResults, err := t.dcrdChainSvr.GetBlockHeaderVerbose(hash)
if err != nil {
return nil, nil, nil, nil, err
return nil, nil, nil, nil, nil, err
}

// Output
@@ -196,11 +198,11 @@ func (t *Collector) CollectBlockInfo(hash *chainhash.Hash) (*apitypes.BlockDataB
CoinSupply: int64(coinSupply),
NextBlockSubsidy: nbSubsidy,
}
return blockdata, feeInfoBlock, blockHeaderResults, extrainfo, err
return blockdata, feeInfoBlock, blockHeaderResults, extrainfo, msgBlock, err
}

// CollectHash collects chain data at the block with the specified hash.
func (t *Collector) CollectHash(hash *chainhash.Hash) (*BlockData, error) {
func (t *Collector) CollectHash(hash *chainhash.Hash) (*BlockData, *wire.MsgBlock, error) {
// In case of a very fast block, make sure previous call to collect is not
// still running, or dcrd may be mad.
t.mtx.Lock()
@@ -212,9 +214,9 @@ func (t *Collector) CollectHash(hash *chainhash.Hash) (*BlockData, error) {
}(time.Now())

// Info specific to the block hash
blockDataBasic, feeInfoBlock, blockHeaderVerbose, extra, err := t.CollectBlockInfo(hash)
blockDataBasic, feeInfoBlock, blockHeaderVerbose, extra, msgBlock, err := t.CollectBlockInfo(hash)
if err != nil {
return nil, err
return nil, nil, err
}

// Number of peer connection to chain server
@@ -238,11 +240,11 @@ func (t *Collector) CollectHash(hash *chainhash.Hash) (*BlockData, error) {
IdxBlockInWindow: int(height%winSize) + 1,
}

return blockdata, err
return blockdata, msgBlock, err
}

// Collect collects chain data at the current best block.
func (t *Collector) Collect() (*BlockData, error) {
func (t *Collector) Collect() (*BlockData, *wire.MsgBlock, error) {
// In case of a very fast block, make sure previous call to collect is not
// still running, or dcrd may be mad.
t.mtx.Lock()
@@ -271,13 +273,13 @@ func (t *Collector) Collect() (*BlockData, error) {
case bbs = <-toch:
case <-time.After(time.Second * 10):
log.Errorf("Timeout waiting for dcrd.")
return nil, errors.New("Timeout")
return nil, nil, errors.New("Timeout")
}

// Stake difficulty
stakeDiff, err := t.dcrdChainSvr.GetStakeDifficulty()
if err != nil {
return nil, err
return nil, nil, err
}

// estimatestakediff
@@ -288,9 +290,9 @@ func (t *Collector) Collect() (*BlockData, error) {
}

// Info specific to the block hash
blockDataBasic, feeInfoBlock, blockHeaderVerbose, extra, err := t.CollectBlockInfo(bbs.hash)
blockDataBasic, feeInfoBlock, blockHeaderVerbose, extra, msgBlock, err := t.CollectBlockInfo(bbs.hash)
if err != nil {
return nil, err
return nil, nil, err
}

// Number of peer connection to chain server
@@ -314,5 +316,5 @@ func (t *Collector) Collect() (*BlockData, error) {
IdxBlockInWindow: int(height%winSize) + 1,
}

return blockdata, err
return blockdata, msgBlock, err
}
6 changes: 3 additions & 3 deletions blockdata/chainmonitor.go
@@ -160,14 +160,14 @@
// relevant for the best block.
if chainHeight != height {
log.Infof("Behind on our collection...")
blockData, err = p.collector.CollectHash(hash)
blockData, _, err = p.collector.CollectHash(hash)
if err != nil {
log.Errorf("blockdata.CollectHash(hash) failed: %v", err.Error())
release()
break keepon
}
} else {
blockData, err = p.collector.Collect()
blockData, _, err = p.collector.Collect()
if err != nil {
log.Errorf("blockdata.Collect() failed: %v", err.Error())
release()
@@ -183,7 +183,7 @@ out:
for _, s := range savers {
if s != nil {
// save data to wherever the saver wants to put it
s.Store(blockData)
s.Store(blockData, msgBlock)
}
}
