test: Make the ReadRows service in tests more modular (#1462)
* test: begin to refactor the mock service (#1447)

* Use prettyPrintRequest from readRowsImpl1

* Define interfaces for creating a Bigtable service

* Remove keyFrom and keyTo from each test

* Define the service inline for standard keys errors

* Add a comment about the refactor steps I am doing

* Add a header to the service parameters file

* Use the ChunkGeneratorParameters interface

* Simplify the diff by creating local variables

* Remove comment

* Eliminate chunks per response constant

* Change imports

* Replace with as ServerImplementationInterface
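
For illustration, the shape of the cast change (both sides appear in the test diff below; the createService call comes from the later refactors):

    // Before: the mock ReadRows handler was cast to `any`.
    ReadRows: readRowsImpl(keyFrom, keyTo) as any,

    // After: the handler is cast to the shared callable interface instead.
    ReadRows: ReadRowsImpl.createService(
      STANDARD_SERVICE_WITHOUT_ERRORS
    ) as ServerImplementationInterface,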

* Update test/readrows.ts

Co-authored-by: Leah E. Cole <[email protected]>

* Add a second test for an error at a random pos

* Add some comments for documentation

* Chunk generation - add the parameter

* Add comments to all the interfaces

* Fix a regression bug from the merge

---------

Co-authored-by: Leah E. Cole <[email protected]>

* test: Eliminate the second ReadRows Service and use a generalized version of the first ReadRows (#1457)

* Set up the second ReadRows service to use params

* Remove duplicate copy of generate chunks from serv

* Remove second copy of isKeyInRowSet

* Eliminate duplicate copies of the debug log

* Fix a bug for the to and from service parameters

Only ignore keyFrom and keyTo when they are undefined, not just when they are falsy.
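
A minimal sketch of the distinction (the guard shown is illustrative, not the actual service code); note that the standard keyFrom of 0 is falsy:

    // Buggy: a keyFrom of 0 (falsy but valid) looked like "not provided".
    const useExplicitRange = Boolean(keyFrom && keyTo);

    // Fixed: only an undefined bound falls back to the request's row set.
    const useExplicitRangeFixed = keyFrom !== undefined && keyTo !== undefined;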

* Add cancel back into the mock service for testing

* Add variables to match service 1

* Add one more check to match the other service

* Remove usages of ReadRows2Impl

* Remove the new service

The old service has been generalized enough to mock correct server behaviour.

* Move the position of the comment for 150 rows

* Eliminate setting keyTo and keyFrom

They are undefined anyway.

* Delete file

* Change splitted to split

* Remove export

* Don’t rename the interfaces

There is no point

* Increase Windows timeout

* Provide functions to eliminate if statements

* Eliminate the if statements

Make a direct call to generateChunksFromRequest

* Revert "Eliminate the if statements"

This reverts commit 0996e89.

* Revert "Provide functions to eliminate if statements"

This reverts commit 4a4761f.

* Change any’s to string | undefined

* Eliminate duplicate code for setting timeouts

* Remove only

* Update test/readrows.ts

Co-authored-by: Leah E. Cole <[email protected]>

---------

Co-authored-by: Leah E. Cole <[email protected]>

* test: Eliminate repeated if statements in the ReadRows mock service, significantly reducing its size (#1460)

* Group the property getter into a function

* Group key selection into functions
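
A hedged sketch of the kind of helper this refers to — the name follows the getKeyValue function mentioned below, but the signature is an assumption, not the actual code:

    // Hypothetical: pick the key bound from the request, else a default.
    function getKeyValue(
      requestKey: string | undefined,
      defaultKey: number
    ): number {
      return requestKey !== undefined ? parseInt(requestKey) : defaultKey;
    }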

* Fix typo: default key

* Documentation for isInRowSet

* Eliminate variable - move function inline

* Omit optional selector on stream

* Create ReadRowsWritableStream interface

* Use new interface, remove ServerWritableStream
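
Judging from the signature removed in the diff below, the new interface presumably just pins grpc's generic stream to the ReadRows request/response types — a sketch, assuming it lives in readRowsServiceParameters.ts:

    import {ServerWritableStream} from '@grpc/grpc-js';
    import {protos} from '../../src';

    // Sketch: a ReadRows-specific server stream replacing the generic form.
    export type ReadRowsWritableStream = ServerWritableStream<
      protos.google.bigtable.v2.IReadRowsRequest,
      protos.google.bigtable.v2.IReadRowsResponse
    >;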

* Don’t pass the whole stream into helpers

* Add a function for generating the chunks

* The debug log should be a pass through parameter
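
A sketch of the pass-through pattern — generateChunksFromRequest and ChunkGeneratorParameters are named in commits above, but this signature is an assumption:

    // The logger is injected rather than imported at module scope,
    // so the test file decides how (and whether) to log.
    function generateChunksFromRequest(
      request: protos.google.bigtable.v2.IReadRowsRequest,
      parameters: ChunkGeneratorParameters,
      debugLog: (text: string) => void
    ) {
      debugLog(`generating chunks for request ${JSON.stringify(request)}`);
      // ... build the CellChunk sequence for this request ...
    }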

* Solve compiler errors resulting immediately from merge

* Add debugLog parameter to various functions

* Add return type

* Remove exports - functions are only used in this module

* Revise merge correction

* Remove TODO for the stream type

* Update the getKeyValue function

* Eliminate place where debug log is needed

* Run linter

* test: Break the ReadRows service down into classes instead of a single long function (#1461)

* Add some TODOs about how to address this next

* Pull send response into a class

* Create a dedicated class for defining the service

* Change name to readRowsRequestHandler

* Pull the function that generates the chunks into a separate module

* Generate documentation

* Add more documentation to the Service class

* Add debugLog as a method parameter

* Eliminate the function that creates a service

Add a factory method instead
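
The resulting shape, as a hedged sketch — createService and the cast target match the test diff below; the constructor and handler internals are assumptions:

    export class ReadRowsImpl {
      private constructor(private params: ReadRowsServiceParameters) {}

      // Factory: returns the callable that the mock grpc service expects.
      static createService(
        params: ReadRowsServiceParameters
      ): ServerImplementationInterface {
        const impl = new ReadRowsImpl(params);
        return (stream: ReadRowsWritableStream) => impl.handleRequest(stream);
      }

      private async handleRequest(stream: ReadRowsWritableStream) {
        // ... stream chunked ReadRows responses according to this.params ...
        stream.end();
      }
    }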

* Add documentation for the constructor

* Remove the TODO

* Set the timeout for the test

* Increase the timeout

* Eliminate ternary statement

* 🦉 Updates from OwlBot post-processor

See https://github.com/googleapis/repo-automation-bots/blob/main/packages/owl-bot/README.md

* Add a couple of comments

* Rewrite the ternary expression

---------

Co-authored-by: Leah E. Cole <[email protected]>
Co-authored-by: Owl Bot <gcf-owl-bot[bot]@users.noreply.github.com>
3 people authored Sep 23, 2024
1 parent 433a8e3 commit cf6906b
Showing 4 changed files with 409 additions and 507 deletions.
216 changes: 125 additions & 91 deletions test/readrows.ts
@@ -13,26 +13,50 @@
 // limitations under the License.

 import {before, describe, it} from 'mocha';
-import {Bigtable, protos, Row, Table} from '../src';
+import {Bigtable, Row, Table} from '../src';
 import * as assert from 'assert';
 import {Transform, PassThrough, pipeline} from 'stream';

 import {GoogleError} from 'google-gax';
 import {MockServer} from '../src/util/mock-servers/mock-server';
 import {BigtableClientMockService} from '../src/util/mock-servers/service-implementations/bigtable-client-mock-service';
 import {MockService} from '../src/util/mock-servers/mock-service';
-import {debugLog, readRowsImpl} from './utils/readRowsImpl';
-import {ServerWritableStream, UntypedHandleCall} from '@grpc/grpc-js';
-import {readRowsImpl2} from './utils/readRowsImpl2';
+import {ReadRowsImpl} from './utils/readRowsImpl';
+
+import {
+  ReadRowsServiceParameters,
+  ReadRowsWritableStream,
+} from '../test/utils/readRowsServiceParameters';
+import * as mocha from 'mocha';
+
+const DEBUG = process.env.BIGTABLE_TEST_DEBUG === 'true';
+
+function debugLog(text: string) {
+  if (DEBUG) {
+    console.log(text);
+  }
+}
+
+// Define parameters for a standard Bigtable Mock service
+const VALUE_SIZE = 1024 * 1024;
+// we want each row to be split into 2 chunks of different sizes
+const CHUNK_SIZE = 1023 * 1024 - 1;
+const CHUNKS_PER_RESPONSE = 10;
+const STANDARD_KEY_FROM = 0;
+// 1000 rows must be enough to reproduce issues with losing the data and to create backpressure
+const STANDARD_KEY_TO = 1000;
+const STANDARD_SERVICE_WITHOUT_ERRORS: ReadRowsServiceParameters = {
+  keyFrom: STANDARD_KEY_FROM,
+  keyTo: STANDARD_KEY_TO,
+  valueSize: VALUE_SIZE,
+  chunkSize: CHUNK_SIZE,
+  chunksPerResponse: CHUNKS_PER_RESPONSE,
+  debugLog,
+};

 type PromiseVoid = Promise<void>;
 interface ServerImplementationInterface {
-  (
-    server: ServerWritableStream<
-      protos.google.bigtable.v2.IReadRowsRequest,
-      protos.google.bigtable.v2.IReadRowsResponse
-    >
-  ): PromiseVoid;
+  (server: ReadRowsWritableStream): PromiseVoid;
 }

 describe('Bigtable/ReadRows', () => {
@@ -53,15 +77,21 @@ describe('Bigtable/ReadRows', () => {
     service = new BigtableClientMockService(server);
   });

-  it('should create read stream and read synchronously', function (done) {
-    this.timeout(60000);
+  // helper function because some tests run slower
+  // on Windows and need a longer timeout
+  function setWindowsTestTimeout(test: mocha.Context) {
+    if (process.platform === 'win32') {
+      test.timeout(60000); // it runs much slower on Windows!
+    }
+  }

-    // 1000 rows must be enough to reproduce issues with losing the data and to create backpressure
-    const keyFrom = 0;
-    const keyTo = 1000;
+  it('should create read stream and read synchronously', function (done) {
+    setWindowsTestTimeout(this);

     service.setService({
-      ReadRows: readRowsImpl(keyFrom, keyTo) as any,
+      ReadRows: ReadRowsImpl.createService(
+        STANDARD_SERVICE_WITHOUT_ERRORS
+      ) as ServerImplementationInterface,
     });

     let receivedRowCount = 0;
Expand All @@ -81,19 +111,17 @@ describe('Bigtable/ReadRows', () => {
debugLog(`received row key ${key}`);
});
readStream.on('end', () => {
assert.strictEqual(receivedRowCount, keyTo - keyFrom);
assert.strictEqual(lastKeyReceived, keyTo - 1);
assert.strictEqual(receivedRowCount, STANDARD_KEY_TO - STANDARD_KEY_FROM);
assert.strictEqual(lastKeyReceived, STANDARD_KEY_TO - 1);
done();
});
});

it('should create read stream and read synchronously using Transform stream', done => {
// 1000 rows must be enough to reproduce issues with losing the data and to create backpressure
const keyFrom = 0;
const keyTo = 1000;

service.setService({
ReadRows: readRowsImpl(keyFrom, keyTo) as any,
ReadRows: ReadRowsImpl.createService(
STANDARD_SERVICE_WITHOUT_ERRORS
) as ServerImplementationInterface,
});

let receivedRowCount = 0;
@@ -128,25 +156,20 @@
       debugLog(`received row key ${key}`);
     });
     passThrough.on('end', () => {
-      assert.strictEqual(receivedRowCount, keyTo - keyFrom);
-      assert.strictEqual(lastKeyReceived, keyTo - 1);
+      assert.strictEqual(receivedRowCount, STANDARD_KEY_TO - STANDARD_KEY_FROM);
+      assert.strictEqual(lastKeyReceived, STANDARD_KEY_TO - 1);
       done();
     });

     pipeline(readStream, transform, passThrough, () => {});
   });

   it('should create read stream and read asynchronously using Transform stream', function (done) {
-    if (process.platform === 'win32') {
-      this.timeout(60000); // it runs much slower on Windows!
-    }
-
-    // 1000 rows must be enough to reproduce issues with losing the data and to create backpressure
-    const keyFrom = 0;
-    const keyTo = 1000;
-
+    setWindowsTestTimeout(this);
     service.setService({
-      ReadRows: readRowsImpl(keyFrom, keyTo) as any,
+      ReadRows: ReadRowsImpl.createService(
+        STANDARD_SERVICE_WITHOUT_ERRORS
+      ) as ServerImplementationInterface,
     });

     let receivedRowCount = 0;
@@ -183,23 +206,22 @@
       debugLog(`received row key ${key}`);
     });
     passThrough.on('end', () => {
-      assert.strictEqual(receivedRowCount, keyTo - keyFrom);
-      assert.strictEqual(lastKeyReceived, keyTo - 1);
+      assert.strictEqual(receivedRowCount, STANDARD_KEY_TO - STANDARD_KEY_FROM);
+      assert.strictEqual(lastKeyReceived, STANDARD_KEY_TO - 1);
       done();
     });

     pipeline(readStream, transform, passThrough, () => {});
   });

   it('should be able to stop reading from the read stream', done => {
-    // 1000 rows must be enough to reproduce issues with losing the data and to create backpressure
-    const keyFrom = 0;
-    const keyTo = 1000;
     // pick any key to stop after
     const stopAfter = 42;

     service.setService({
-      ReadRows: readRowsImpl(keyFrom, keyTo) as any,
+      ReadRows: ReadRowsImpl.createService(
+        STANDARD_SERVICE_WITHOUT_ERRORS
+      ) as ServerImplementationInterface,
     });

     let receivedRowCount = 0;
@@ -232,18 +254,14 @@

   // TODO: enable after https://github.com/googleapis/nodejs-bigtable/issues/1286 is fixed
   it('should be able to stop reading from the read stream when reading asynchronously', function (done) {
-    if (process.platform === 'win32') {
-      this.timeout(600000); // it runs much slower on Windows!
-    }
-
-    // 1000 rows must be enough to reproduce issues with losing the data and to create backpressure
-    const keyFrom = 0;
-    const keyTo = 1000;
+    setWindowsTestTimeout(this);
     // pick any key to stop after
     const stopAfter = 420;

     service.setService({
-      ReadRows: readRowsImpl(keyFrom, keyTo) as any,
+      ReadRows: ReadRowsImpl.createService(
+        STANDARD_SERVICE_WITHOUT_ERRORS
+      ) as ServerImplementationInterface,
     });

     let receivedRowCount = 0;
@@ -294,61 +312,77 @@
     pipeline(readStream, transform, passThrough, () => {});
   });

-  it('should silently resume after server or network error', done => {
-    // 1000 rows must be enough to reproduce issues with losing the data and to create backpressure
-    const keyFrom = 0;
-    const keyTo = 1000;
-    // the server will error after sending this chunk (not row)
-    const errorAfterChunkNo = 423;
-
-    service.setService({
-      ReadRows: readRowsImpl(keyFrom, keyTo, errorAfterChunkNo) as any,
-    });
-
-    let receivedRowCount = 0;
-    let lastKeyReceived: number | undefined;
-
-    const readStream = table.createReadStream();
-    readStream.on('error', (err: GoogleError) => {
-      done(err);
-    });
-    readStream.on('data', (row: Row) => {
-      ++receivedRowCount;
-      const key = parseInt(row.id);
-      if (lastKeyReceived && key <= lastKeyReceived) {
-        done(new Error('Test error: keys are not in order'));
-      }
-      lastKeyReceived = key;
-      debugLog(`received row key ${key}`);
-    });
-    readStream.on('end', () => {
-      assert.strictEqual(receivedRowCount, keyTo - keyFrom);
-      assert.strictEqual(lastKeyReceived, keyTo - 1);
-      done();
-    });
-  });
+  describe('should silently resume after server or network error', () => {
+    function runTest(done: Mocha.Done, errorAfterChunkNo: number) {
+      service.setService({
+        ReadRows: ReadRowsImpl.createService({
+          keyFrom: STANDARD_KEY_FROM,
+          keyTo: STANDARD_KEY_TO,
+          valueSize: VALUE_SIZE,
+          chunkSize: CHUNK_SIZE,
+          chunksPerResponse: CHUNKS_PER_RESPONSE,
+          errorAfterChunkNo,
+          debugLog,
+        }) as ServerImplementationInterface,
+      });
+      let receivedRowCount = 0;
+      let lastKeyReceived: number | undefined;
+
+      const readStream = table.createReadStream();
+      readStream.on('error', (err: GoogleError) => {
+        done(err);
+      });
+      readStream.on('data', (row: Row) => {
+        ++receivedRowCount;
+        const key = parseInt(row.id);
+        if (lastKeyReceived && key <= lastKeyReceived) {
+          done(new Error('Test error: keys are not in order'));
+        }
+        lastKeyReceived = key;
+        debugLog(`received row key ${key}`);
+      });
+      readStream.on('end', () => {
+        assert.strictEqual(
+          receivedRowCount,
+          STANDARD_KEY_TO - STANDARD_KEY_FROM
+        );
+        assert.strictEqual(lastKeyReceived, STANDARD_KEY_TO - 1);
+        done();
+      });
+    }
+    it('with an error at a fixed position', function (done) {
+      setWindowsTestTimeout(this);
+      // Emits an error after enough chunks have been pushed to create back pressure
+      runTest(done, 423);
+    });
+    it('with an error at a random position', function (done) {
+      this.timeout(200000);
+      // Emits an error after a random number of chunks.
+      const errorAfterChunkNo = Math.floor(Math.random() * 1000);
+      runTest(done, errorAfterChunkNo);
+    });
+  });

-  it('should return row data in the right order', done => {
-    // 150 rows must be enough to reproduce issues with losing the data and to create backpressure
-    const keyFrom = undefined;
-    const keyTo = undefined;
-    // the server will error after sending this chunk (not row)
-    const errorAfterChunkNo = 100;
+  it('should return row data in the right order', function (done) {
+    setWindowsTestTimeout(this);
     const dataResults = [];

-    // TODO: Do not use `any` here, make it a more specific type and address downstream implications on the mock server.
+    // keyTo and keyFrom are not provided so they will be determined from
+    // the request that is passed in.
     service.setService({
-      ReadRows: readRowsImpl2(
-        keyFrom,
-        keyTo,
-        errorAfterChunkNo
-      ) as ServerImplementationInterface,
+      ReadRows: ReadRowsImpl.createService({
+        errorAfterChunkNo: 100, // the server will error after sending this chunk (not row)
+        valueSize: 1,
+        chunkSize: 1,
+        chunksPerResponse: 1,
+        debugLog,
+      }) as ServerImplementationInterface,
     });
     const sleep = (ms: number) => {
       return new Promise(resolve => setTimeout(resolve, ms));
     };
     (async () => {
       try {
+        // 150 rows must be enough to reproduce issues with losing the data and to create backpressure
         const stream = table.createReadStream({
           start: '00000000',
           end: '00000150',
