diff --git a/README.md b/README.md
index da4ea2ad7..ac7a2b35c 100644
--- a/README.md
+++ b/README.md
@@ -10,7 +10,7 @@ The Snowflake .NET connector supports the following .NET framework and libra

- .NET Framework 4.7.2
- .NET 6.0

-Please refer to the Notice section below for information about safe usage of the .NET Driver
+Please refer to the [Notice](#notice) section below for information about safe usage of the .NET Driver

# Coding conventions for the project

@@ -68,926 +68,44 @@ Alternatively, packages can also be downloaded using Package Manager Console:

PM> Install-Package Snowflake.Data
```

-# Testing the Connector
+# Testing and Code Coverage

-Before running tests, create a parameters.json file under the Snowflake.Data.Tests\ directory. In this file, specify the username, password and account info that the tests will run against. Here is a sample parameters.json file:

+[Running tests](doc/Testing.md)

-```
-{
-  "testconnection": {
-    "SNOWFLAKE_TEST_USER": "snowman",
-    "SNOWFLAKE_TEST_PASSWORD": "XXXXXXX",
-    "SNOWFLAKE_TEST_ACCOUNT": "TESTACCOUNT",
-    "SNOWFLAKE_TEST_WAREHOUSE": "TESTWH",
-    "SNOWFLAKE_TEST_DATABASE": "TESTDB",
-    "SNOWFLAKE_TEST_SCHEMA": "TESTSCHEMA",
-    "SNOWFLAKE_TEST_ROLE": "TESTROLE",
-    "SNOWFLAKE_TEST_HOST": "testaccount.snowflakecomputing.com"
-  }
-}
-```
-
-## Command Prompt
-
-Building the solution builds the connector and test binaries. Issue the following command from the command line to run the tests. The test binary is located in the Debug directory if you built the solution file in Debug mode.
-
-```bash
-cd Snowflake.Data.Tests
-dotnet test -f net6.0 -l "console;verbosity=normal"
-```
-
-Tests can also be run under code coverage:
-
-```bash
-dotnet-coverage collect "dotnet test --framework net6.0 --no-build -l console;verbosity=normal" --output net6.0_coverage.xml --output-format cobertura --settings coverage.config
-```
-
-You can also run only a specific suite of tests (integration or unit).
-
-Running unit tests:
-
-```bash
-cd Snowflake.Data.Tests
-dotnet test -l "console;verbosity=normal" --filter FullyQualifiedName~UnitTests
-```
+
+[Code coverage](doc/CodeCoverage.md)
-
-Running integration tests:
-
-```bash
-cd Snowflake.Data.Tests
-dotnet test -l "console;verbosity=normal" --filter FullyQualifiedName~IntegrationTests
-```
-
-## Visual Studio 2017
-
-Tests can also be run under Visual Studio 2017. Open the solution file in Visual Studio 2017 and run the tests using Test Explorer.
+
+---

# Usage

## Create a Connection

-To connect to Snowflake, specify a valid connection string composed of key-value pairs separated by semicolons,
-i.e. "\<key1\>=\<value1\>;\<key2\>=\<value2\>...".
-
-**Note**: If the keyword or value contains an equal sign (=), you must precede the equal sign with another equal sign. For example, if the keyword is "key" and the value is "value_part1=value_part2", use "key=value_part1==value_part2".
-
-The following table lists all valid connection properties:
-
-
-| Connection Property            | Required | Comment |
-|--------------------------------|----------|---------|
-| ACCOUNT                        | Yes      | Your full account name. It might include additional segments that identify the region and cloud platform where your account is hosted. |
-| APPLICATION                    | No       | **_Snowflake partner use only_**: Specifies the name of a partner application to connect through .NET. The name must match the following pattern: ^\[A-Za-z](\[A-Za-z0-9.-]){1,50}$ (one letter followed by 1 to 50 letters, digits, or the characters `.`, `-` or `_`). |
-| DB                             | No       | |
-| HOST                           | No       | Specifies the hostname for your account in the following format: \<account\>.snowflakecomputing.com.<br />If no value is specified, the driver uses \<account\>.snowflakecomputing.com. |
-| PASSWORD                       | Depends  | Required if AUTHENTICATOR is set to `snowflake` (the default value) or the URL for native SSO through Okta. Ignored for all other authentication types. |
-| ROLE                           | No       | |
-| SCHEMA                         | No       | |
-| USER                           | Depends  | Optional if AUTHENTICATOR is set to `externalbrowser`. For native SSO through Okta, set this to the login name for your identity provider (IdP). |
-| WAREHOUSE                      | No       | |
-| CONNECTION_TIMEOUT             | No       | Total timeout in seconds when connecting to Snowflake. The default is 300 seconds. |
-| RETRY_TIMEOUT                  | No       | Total timeout in seconds for supported endpoints of the retry policy. The default is 300 seconds. The value can only be increased from the default, or set to 0 for an infinite timeout. |
-| MAXHTTPRETRIES                 | No       | Maximum number of times to retry failed HTTP requests (default: 7). You can set `MAXHTTPRETRIES=0` to remove the retry limit, but doing so runs the risk of the .NET driver infinitely retrying failed HTTP calls. |
-| CLIENT_SESSION_KEEP_ALIVE      | No       | Whether to keep the current session active after a period of inactivity, or to force the user to log in again. If the value is `true`, Snowflake keeps the session active indefinitely, even if there is no activity from the user. If the value is `false`, the user must log in again after four hours of inactivity. The default is `false`. Setting this value overrides the server session property for the current session. |
-| BROWSER_RESPONSE_TIMEOUT       | No       | Number of seconds to wait for authentication in an external browser (default: 120). |
-| DISABLERETRY                   | No       | Set this property to `true` to prevent the driver from reconnecting automatically when the connection fails or drops. The default value is `false`. |
-| AUTHENTICATOR                  | No       | The method of authentication. Currently supports the following values:<br />- snowflake (default): You must also set USER and PASSWORD.<br />- [the URL for native SSO through Okta](https://docs.snowflake.com/en/user-guide/admin-security-fed-auth-use.html#native-sso-okta-only): You must also set USER and PASSWORD.<br />- [externalbrowser](https://docs.snowflake.com/en/user-guide/admin-security-fed-auth-use.html#browser-based-sso): You must also set USER.<br />- [snowflake_jwt](https://docs.snowflake.com/en/user-guide/key-pair-auth.html): You must also set PRIVATE_KEY_FILE or PRIVATE_KEY.<br />- [oauth](https://docs.snowflake.com/en/user-guide/oauth.html): You must also set TOKEN. |
-| VALIDATE_DEFAULT_PARAMETERS    | No       | Whether DB, SCHEMA and WAREHOUSE should be verified when making a connection. The default is true. |
-| PRIVATE_KEY_FILE               | Depends  | The path to the private key file to use for key-pair authentication. Must be used in combination with AUTHENTICATOR=snowflake_jwt. |
-| PRIVATE_KEY_PWD                | No       | The passphrase to use for decrypting the private key, if the key is encrypted. |
-| PRIVATE_KEY                    | Depends  | The private key to use for key-pair authentication. Must be used in combination with AUTHENTICATOR=snowflake_jwt.<br />If the private key value includes any equal signs (=), make sure to replace each equal sign with two signs (==) to ensure that the connection string is parsed correctly. |
-| TOKEN                          | Depends  | The OAuth token to use for OAuth authentication. Must be used in combination with AUTHENTICATOR=oauth. |
-| INSECUREMODE                   | No       | Set to true to disable the certificate revocation list check. The default is false. |
-| USEPROXY                       | No       | Set to true if you need to use a proxy server. The default value is false.<br /><br />This parameter was introduced in v2.0.4. |
-| PROXYHOST                      | Depends  | The hostname of the proxy server.<br /><br />If USEPROXY is set to `true`, you must set this parameter.<br /><br />This parameter was introduced in v2.0.4. |
-| PROXYPORT                      | Depends  | The port number of the proxy server.<br /><br />If USEPROXY is set to `true`, you must set this parameter.<br /><br />This parameter was introduced in v2.0.4. |
-| PROXYUSER                      | No       | The username for authenticating to the proxy server.<br /><br />This parameter was introduced in v2.0.4. |
-| PROXYPASSWORD                  | Depends  | The password for authenticating to the proxy server.<br /><br />If USEPROXY is `true` and PROXYUSER is set, you must set this parameter.<br /><br />This parameter was introduced in v2.0.4. |
-| NONPROXYHOSTS                  | No       | The list of hosts that the driver should connect to directly, bypassing the proxy server. Separate the hostnames with a pipe symbol (\|). You can also use an asterisk (`*`) as a wildcard.<br />The target host must fully match an item from this list for the proxy server to be bypassed.<br /><br />This parameter was introduced in v2.0.4. |
-| FILE_TRANSFER_MEMORY_THRESHOLD | No       | The maximum number of bytes to keep in memory for file encryption. If the size of the file being encrypted or decrypted exceeds this value, a temporary file is created and the work continues in the temporary file instead of in memory.<br />If no value is provided, 1 MB (1048576 bytes) is used as the default.<br />Any integer value greater than zero can be configured as the maximal number of bytes to reside in memory. |
-| CLIENT_CONFIG_FILE             | No       | The location of the client configuration JSON file. In this file you can configure the easy logging feature. |
-| ALLOWUNDERSCORESINHOST         | No       | Specifies whether to allow underscores in account names. This impacts PrivateLink customers whose account names contain underscores. In this situation, you must override the default value by setting allowUnderscoresInHost to true. |
-| QUERY_TAG                      | No       | Optional string that can be used to tag queries and other SQL statements executed within a connection. The tags are displayed in the output of the QUERY_HISTORY and QUERY_HISTORY_BY_* functions. |
-| MAXPOOLSIZE                    | No       | Maximum number of connections in a pool. The default value is 10. The `maxPoolSize` value cannot be lower than the `minPoolSize` value. |
-| MINPOOLSIZE                    | No       | Expected minimum number of connections in the pool. When you get a connection from the pool, more connections might be initialised in the background to increase the pool size to `minPoolSize`. If you specify 0 or 1, no extra connections are initialised in the background. The default value is 2. The `maxPoolSize` value cannot be lower than the `minPoolSize` value. The parameter is used only in the new version of the connection pool. |
-| CHANGEDSESSION                 | No       | Specifies what happens to a closed connection when any of its session variables have been altered (e.g. you used `ALTER SESSION SET SCHEMA` to change the database schema). The default behaviour is `OriginalPool`, which means the session returns to the original pool. Currently no other option is available. The parameter is used only in the new version of the connection pool. |
-| WAITINGFORIDLESESSIONTIMEOUT   | No       | Timeout for waiting for an idle session when the pool is full, i.e. when there is no idle session and a new one cannot be created because `maxPoolSize` has been reached. The default value is 30 seconds.<br />Allowed unit postfixes are, e.g., `1000ms` (milliseconds), `15s` (seconds) and `2m` (minutes); seconds are assumed when the postfix is skipped. Special value: `0` - fail immediately when no idle session is available. An infinite value cannot be specified. |
-| EXPIRATIONTIMEOUT              | No       | Timeout for using each connection. Connections which last longer than the specified timeout are considered expired and are removed from the pool. The default is 1 hour. Allowed unit postfixes are, e.g., `360000ms` (milliseconds), `3600s` (seconds) and `60m` (minutes); seconds are assumed when the postfix is skipped. Special value: `0` - the connection expires immediately after its creation. The expiration timeout cannot be set to infinity. |
-| POOLINGENABLED                 | No       | Boolean flag indicating whether the connection should be part of a pool. The default value is `true`. |
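The pooling-related properties above can be combined in a single connection string. A hypothetical example (the account, user and password values are placeholders):

```
account=testaccount;user=testuser;password=XXXXX;db=testdb;schema=testschema;maxPoolSize=10;minPoolSize=2;expirationTimeout=60m;poolingEnabled=true
```

Values with unit postfixes follow the rules described for WAITINGFORIDLESESSIONTIMEOUT and EXPIRATIONTIMEOUT.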
-
-### Password-based Authentication
-
-The following example demonstrates how to open a connection to Snowflake. This example uses a password for authentication.
-
-```cs
-using (IDbConnection conn = new SnowflakeDbConnection())
-{
-    conn.ConnectionString = "account=testaccount;user=testuser;password=XXXXX;db=testdb;schema=testschema";
-
-    conn.Open();
-
-    conn.Close();
-}
-```
-
-Beginning with version 2.0.18, the .NET connector uses Microsoft [DbConnectionStringBuilder](https://learn.microsoft.com/en-us/dotnet/api/system.data.oledb.oledbconnection.connectionstring?view=dotnet-plat-ext-6.0#remarks) to follow the .NET specification for escaping characters in connection strings.
-
-The following examples show how you can include different types of special characters in a connection string:
-
-- To include a single quote (') character:
-
-  ```cs
-  string connectionString = String.Format(
-      "account=testaccount; " +
-      "user=testuser; " +
-      "password=test'password;"
-  );
-  ```
-
-- To include a double quote (") character:
-
-  ```cs
-  string connectionString = String.Format(
-      "account=testaccount; " +
-      "user=testuser; " +
-      "password=test\"password;"
-  );
-  ```
-
-- To include a semicolon (;):
-
-  ```cs
-  string connectionString = String.Format(
-      "account=testaccount; " +
-      "user=testuser; " +
-      "password=\"test;password\";"
-  );
-  ```
-
-- To include an equal sign (=):
-
-  ```cs
-  string connectionString = String.Format(
-      "account=testaccount; " +
-      "user=testuser; " +
-      "password=test=password;"
-  );
-  ```
-
-  Note that previously you needed to use a double equal sign (==) to escape the character. However, beginning with version 2.0.18, you can use a single equal sign.
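Because the driver follows `DbConnectionStringBuilder` semantics (v2.0.18 and later), you can also let the standard `System.Data.Common` builder apply the escaping rules instead of writing them by hand. A minimal sketch (the property names shown are illustrative):

```cs
using System;
using System.Data.Common;

var builder = new DbConnectionStringBuilder();
builder["account"] = "testaccount";
builder["user"] = "testuser";
builder["password"] = "test;pass=word"; // special characters are quoted automatically

// The composed string can then be assigned to conn.ConnectionString.
Console.WriteLine(builder.ConnectionString);
```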
- -### Other Authentication Methods - -If you are using a different method for authentication, see the examples below: - -- **Key-pair authentication** - - After setting up [key-pair authentication](https://docs.snowflake.com/en/user-guide/key-pair-auth.html), you can specify the - private key for authentication in one of the following ways: - - - Specify the file containing an unencrypted private key: - - ```cs - using (IDbConnection conn = new SnowflakeDbConnection()) - { - conn.ConnectionString = "account=testaccount;authenticator=snowflake_jwt;user=testuser;private_key_file={pathToThePrivateKeyFile};db=testdb;schema=testschema"; - - conn.Open(); - - conn.Close(); - } - ``` - - where: - - - `{pathToThePrivateKeyFile}` is the path to the file containing the unencrypted private key. - - - Specify the file containing an encrypted private key: - - ```cs - using (IDbConnection conn = new SnowflakeDbConnection()) - { - conn.ConnectionString = "account=testaccount;authenticator=snowflake_jwt;user=testuser;private_key_file={pathToThePrivateKeyFile};private_key_pwd={passwordForDecryptingThePrivateKey};db=testdb;schema=testschema"; - - conn.Open(); - - conn.Close(); - } - ``` - - where: - - - `{pathToThePrivateKeyFile}` is the path to the file containing the unencrypted private key. - - `{passwordForDecryptingThePrivateKey}` is the password for decrypting the private key. - - - Specify an unencrypted private key (read from a file): - - ```cs - using (IDbConnection conn = new SnowflakeDbConnection()) - { - string privateKeyContent = File.ReadAllText({pathToThePrivateKeyFile}); - - conn.ConnectionString = String.Format("account=testaccount;authenticator=snowflake_jwt;user=testuser;private_key={0};db=testdb;schema=testschema", privateKeyContent); - - conn.Open(); - - conn.Close(); - } - ``` - - where: - - - `{pathToThePrivateKeyFile}` is the path to the file containing the unencrypted private key. 
-
-- **OAuth**
-
-  After setting up [OAuth](https://docs.snowflake.com/en/user-guide/oauth.html), set `AUTHENTICATOR=oauth` and `TOKEN` to the
-  OAuth token in the connection string.
-
-  ```cs
-  using (IDbConnection conn = new SnowflakeDbConnection())
-  {
-      conn.ConnectionString = "account=testaccount;user=testuser;authenticator=oauth;token={oauthTokenValue};db=testdb;schema=testschema";
-
-      conn.Open();
-
-      conn.Close();
-  }
-  ```
-
-  where:
-
-  - `{oauthTokenValue}` is the OAuth token to use for authentication.
-
-- **Browser-based SSO**
-
-  In the connection string, set `AUTHENTICATOR=externalbrowser`.
-  Optionally, `USER` can be set. In that case, authentication completes only if the user authenticated via the external browser matches the `USER` value from the configuration.
-
-  ```cs
-  using (IDbConnection conn = new SnowflakeDbConnection())
-  {
-      conn.ConnectionString = "account=testaccount;authenticator=externalbrowser;user={login_name_for_IdP};db=testdb;schema=testschema";
-
-      conn.Open();
-
-      conn.Close();
-  }
-  ```
-
-  where:
-
-  - `{login_name_for_IdP}` is your login name for your IdP.
-
-  You can override the default timeout after which external browser authentication is marked as failed.
-  The timeout prevents an infinite hang when the user never provides the login details, e.g. after closing the browser tab.
-  To override it, provide the `BROWSER_RESPONSE_TIMEOUT` parameter (in seconds).
-
-- **Native SSO through Okta**
-
-  In the connection string, set `AUTHENTICATOR` to the
-  [URL of the endpoint for your Okta account](https://docs.snowflake.com/en/user-guide/admin-security-fed-auth-use.html#label-native-sso-okta),
-  and set `USER` to the login name for your IdP.
-
-  ```cs
-  using (IDbConnection conn = new SnowflakeDbConnection())
-  {
-      conn.ConnectionString = "account=testaccount;authenticator={okta_url_endpoint};user={login_name_for_IdP};db=testdb;schema=testschema";
-
-      conn.Open();
-
-      conn.Close();
-  }
-  ```
-
-  where:
-
-  - `{okta_url_endpoint}` is the URL for the endpoint for your Okta account (e.g. `https://<okta_account_name>.okta.com`).
-  - `{login_name_for_IdP}` is your login name for your IdP.
-
-In v2.0.4 and later releases, you can configure the driver to connect through a proxy server. The following example configures the
-driver to connect through the proxy server `myproxyserver` on port `8888`. The driver authenticates to the proxy server as the
-user `test` with the password `test`:
-
-```cs
-using (IDbConnection conn = new SnowflakeDbConnection())
-{
-    conn.ConnectionString = "account=testaccount;user=testuser;password=XXXXX;db=testdb;schema=testschema;useProxy=true;proxyHost=myproxyserver;proxyPort=8888;proxyUser=test;proxyPassword=test";
-
-    conn.Open();
-
-    conn.Close();
-}
-```
-
-The NONPROXYHOSTS property can be set to specify hosts for which the driver should bypass the proxy server. Define each entry using the full host URL, or a URL with the `*` wildcard symbol.
-
-Examples:
-
-- `*` (Bypass the proxy for all hosts)
-- `*.snowflakecomputing.com` (Bypass the proxy for all hosts ending with `snowflakecomputing.com`)
-- `https://testaccount.snowflakecomputing.com` (Bypass the proxy using the full host URL)
-- `*.myserver.com | *testaccount*` (You can specify multiple patterns separated by `|`)
-
-> Note: The NONPROXYHOSTS value should match the full URL, including the http or https section. The `*` wildcard can be added so that the hostname matches successfully.
-
-- `myaccount.snowflakecomputing.com` (Not bypassed).
-- `*myaccount.snowflakecomputing.com` (Bypassed).
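Combining the proxy settings with a bypass list, a hypothetical connection string might look like this (account, credentials and hosts are placeholders):

```
account=testaccount;user=testuser;password=XXXXX;useProxy=true;proxyHost=myproxyserver;proxyPort=8888;nonProxyHosts=*.snowflakecomputing.com|localhost
```

Hosts matching either pattern are connected to directly; all other traffic goes through `myproxyserver:8888`.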
-
+To create a connection, see:
+[Connecting and Authentication Methods](doc/Connecting.md)

## Using Connection Pools

+[Multiple Connection Pools](doc/ConnectionPooling.md)
+
+[Single Connection Pool](doc/ConnectionPoolingDeprecated.md) - `deprecated`

## Data Types and Formats

+[Data Types and Data Formats](doc/DataTypes.md)

## Querying Data

+[Running Queries and Reading Results](doc/QueryingData.md)

## Stage Files

-Instead of creating a connection each time your client application needs to access Snowflake, you can define a cache of Snowflake connections that can be reused as needed. Connection pooling usually reduces the lag time to make a connection. However, it can slow down client failover to an alternative DNS when a DNS problem occurs.
-
-The Snowflake .NET driver provides the following functions for managing connection pools.
-
-| Function                                       | Description                                                                                             |
-| ---------------------------------------------- | ------------------------------------------------------------------------------------------------------- |
-| SnowflakeDbConnectionPool.ClearAllPools()      | Removes all connections from the connection pool.                                                       |
-| SnowflakeDbConnection.SetMaxPoolSize(n)        | Sets the maximum number of connections for the connection pool, where _n_ is the number of connections. |
-| SnowflakeDbConnectionPool.SetTimeout(n)        | Sets the number of seconds to keep an unresponsive connection in the connection pool.                   |
-| SnowflakeDbConnectionPool.GetCurrentPoolSize() | Returns the number of connections currently in the connection pool.                                     |
-| SnowflakeDbConnectionPool.SetPooling()         | Determines whether to enable (`true`) or disable (`false`) connection pooling. Default: `true`.         |
-
-The following sample demonstrates how to monitor the size of a connection pool as connections are added and dropped from the pool.
- -```cs -public void TestConnectionPoolClean() -{ - SnowflakeDbConnectionPool.ClearAllPools(); - SnowflakeDbConnectionPool.SetMaxPoolSize(2); - var conn1 = new SnowflakeDbConnection(); - conn1.ConnectionString = ConnectionString; - conn1.Open(); - Assert.AreEqual(ConnectionState.Open, conn1.State); - - var conn2 = new SnowflakeDbConnection(); - conn2.ConnectionString = ConnectionString + " retryCount=1"; - conn2.Open(); - Assert.AreEqual(ConnectionState.Open, conn2.State); - Assert.AreEqual(0, SnowflakeDbConnectionPool.GetCurrentPoolSize()); - conn1.Close(); - conn2.Close(); - Assert.AreEqual(2, SnowflakeDbConnectionPool.GetCurrentPoolSize()); - var conn3 = new SnowflakeDbConnection(); - conn3.ConnectionString = ConnectionString + " retryCount=2"; - conn3.Open(); - Assert.AreEqual(ConnectionState.Open, conn3.State); - - var conn4 = new SnowflakeDbConnection(); - conn4.ConnectionString = ConnectionString + " retryCount=3"; - conn4.Open(); - Assert.AreEqual(ConnectionState.Open, conn4.State); - - conn3.Close(); - Assert.AreEqual(2, SnowflakeDbConnectionPool.GetCurrentPoolSize()); - conn4.Close(); - Assert.AreEqual(2, SnowflakeDbConnectionPool.GetCurrentPoolSize()); - - Assert.AreEqual(ConnectionState.Closed, conn1.State); - Assert.AreEqual(ConnectionState.Closed, conn2.State); - Assert.AreEqual(ConnectionState.Closed, conn3.State); - Assert.AreEqual(ConnectionState.Closed, conn4.State); -} -``` - -## Mapping .NET and Snowflake Data Types - -The .NET driver supports the following mappings from .NET to Snowflake data types. 
-
-| .NET Framework Data Type | Data Type in Snowflake |
-| ------------------------ | ---------------------- |
-| `int`, `long`            | `NUMBER(38, 0)`        |
-| `decimal`                | `NUMBER(38, <scale>)`  |
-| `double`                 | `REAL`                 |
-| `string`                 | `TEXT`                 |
-| `bool`                   | `BOOLEAN`              |
-| `byte`                   | `BINARY`               |
-| `datetime`               | `DATE`                 |
-
-## Arrow data format
-
-The .NET connector, starting with v2.1.3, supports the [Arrow data format](https://arrow.apache.org/)
-as a [preview](https://docs.snowflake.com/en/release-notes/preview-features) feature for data transfers
-between Snowflake and a .NET client. The Arrow data format avoids extra
-conversions between binary and textual representations of the data. The Arrow
-data format can improve performance and reduce memory consumption in clients.
-
-The data format is controlled by the
-DOTNET_QUERY_RESULT_FORMAT parameter. To use the Arrow format, execute:
-
-```snowflake
--- at the session level
-ALTER SESSION SET DOTNET_QUERY_RESULT_FORMAT = ARROW;
--- or at the user level
-ALTER USER SET DOTNET_QUERY_RESULT_FORMAT = ARROW;
--- or at the account level
-ALTER ACCOUNT SET DOTNET_QUERY_RESULT_FORMAT = ARROW;
-```
-
-The valid values for the parameter are:
-
-- ARROW
-- JSON (default)
-
-## Run a Query and Read Data
-
-```cs
-using (IDbConnection conn = new SnowflakeDbConnection())
-{
-    conn.ConnectionString = connectionString;
-    conn.Open();
-
-    IDbCommand cmd = conn.CreateCommand();
-    cmd.CommandText = "select * from t";
-    IDataReader reader = cmd.ExecuteReader();
-
-    while (reader.Read())
-    {
-        Console.WriteLine(reader.GetString(0));
-    }
-
-    conn.Close();
-}
-```
-
-Note that for a `TIME` column, the reader returns a `System.DateTime` value. If you need a `System.TimeSpan` value, call the
-`GetTimeSpan` method in `SnowflakeDbDataReader`. This method was introduced in the v2.0.4 release.
-
-Note that because this method is not available in the generic `IDataReader` interface, you must cast the object as
-`SnowflakeDbDataReader` before calling the method.
For example:
-
-```cs
-TimeSpan timeSpanTime = ((SnowflakeDbDataReader)reader).GetTimeSpan(13);
-```
-
-## Execute a query asynchronously on the server
-
-You can run a query asynchronously on the server. The server responds immediately with a `queryId` and continues to execute the query asynchronously.
-You can then use this `queryId` to check the query status or wait until the query is completed and get the results.
-It is fine to start the query in one session and continue to query for the results in another one based on the `queryId`.
-
-**Note**: There are 2 levels of asynchronous execution. One is asynchronous execution in terms of the C# language (`async await`).
-The other is asynchronous execution of the query by the server (you can recognize it by method names containing `InAsyncMode`, e.g. `ExecuteInAsyncMode`, `ExecuteAsyncInAsyncMode`).
-
-Example of synchronous code starting a query to be executed asynchronously on the server:
-```cs
-using (SnowflakeDbConnection conn = new SnowflakeDbConnection("account=testaccount;user=testuser;password=testpassword"))
-{
-    conn.Open();
-    SnowflakeDbCommand cmd = (SnowflakeDbCommand)conn.CreateCommand();
-    cmd.CommandText = "SELECT ...";
-    var queryId = cmd.ExecuteInAsyncMode();
-    // ...
-}
-```
-
-Example of asynchronous code starting a query to be executed asynchronously on the server:
-```cs
-using (SnowflakeDbConnection conn = new SnowflakeDbConnection("account=testaccount;user=testuser;password=testpassword"))
-{
-    await conn.OpenAsync(CancellationToken.None).ConfigureAwait(false);
-    SnowflakeDbCommand cmd = (SnowflakeDbCommand)conn.CreateCommand();
-    cmd.CommandText = "SELECT ...";
-    var queryId = await cmd.ExecuteAsyncInAsyncMode(CancellationToken.None).ConfigureAwait(false);
-    // ...
-}
-```
-
-You can check the status of a query executed asynchronously on the server either in synchronous code:
-```cs
-var queryStatus = cmd.GetQueryStatus(queryId);
-Assert.IsTrue(conn.IsStillRunning(queryStatus)); // assuming that the query is still running
-Assert.IsFalse(conn.IsAnError(queryStatus)); // assuming that the query has not finished with an error
-```
-or in asynchronous code:
-```cs
-var queryStatus = await cmd.GetQueryStatusAsync(queryId, CancellationToken.None).ConfigureAwait(false);
-Assert.IsTrue(conn.IsStillRunning(queryStatus)); // assuming that the query is still running
-Assert.IsFalse(conn.IsAnError(queryStatus)); // assuming that the query has not finished with an error
-```
-
-The following examples show how to get the query results.
-The operation repeatedly checks the query status until the query is completed, a timeout occurs, or the maximum number of attempts is reached.
-The synchronous code example:
-```cs
-DbDataReader reader = cmd.GetResultsFromQueryId(queryId);
-```
-and the asynchronous code example:
-```cs
-DbDataReader reader = await cmd.GetResultsFromQueryIdAsync(queryId, CancellationToken.None).ConfigureAwait(false);
-```
-
-**Note**: GET/PUT operations are currently not enabled for asynchronous executions.
-
-## Executing a Batch of SQL Statements (Multi-Statement Support)
-
-With version 2.0.18 and later of the .NET connector, you can send
-a batch of SQL statements, separated by semicolons,
-to be executed in a single request.
-
-**Note**: Snowflake does not currently support variable binding in multi-statement SQL requests.
-
----
-
-**Note**
-
-By default, Snowflake returns an error for queries issued with multiple statements to protect against SQL injection attacks.
The multiple statements feature makes your system more vulnerable to SQL injection, so it should be used carefully. You can reduce the risk by using the MULTI_STATEMENT_COUNT parameter to specify the number of statements to be executed, which makes it more difficult to inject a statement by appending to the query.
-
----
-
-You can execute multiple statements as a batch in the same way you execute queries with single statements, except that the query string contains multiple statements separated by semicolons. Note that multiple statements execute sequentially, not in parallel.
-
-You can set this parameter at the session level using the following command:
-
-```
-ALTER SESSION SET MULTI_STATEMENT_COUNT = <0/1>;
-```
-
-where:
-
-- **0**: Enables an unspecified number of SQL statements in a query.
-
-  Using this value allows batch queries to contain any number of SQL statements without needing to specify the MULTI_STATEMENT_COUNT statement parameter. However, be aware that using this value reduces the protection against SQL injection attacks.
-
-- **1**: Allows one SQL statement or a specified number of statements in a query string (default).
-
-  You must include MULTI_STATEMENT_COUNT as a statement parameter to specify the number of statements included when the query string contains more than one statement. If the number of statements sent in the query string does not match the MULTI_STATEMENT_COUNT value, the .NET driver rejects the request. You can, however, omit this parameter if you send a single statement.
-
-The following example sets the MULTI_STATEMENT_COUNT session parameter to 1. Then, for an individual command, it sets MULTI_STATEMENT_COUNT=3 to indicate that the query contains precisely three SQL commands. The query string, `cmd.CommandText`, then contains the three statements to execute.
-
-```cs
-using (IDbConnection conn = new SnowflakeDbConnection())
-{
-    conn.ConnectionString = ConnectionString;
-    conn.Open();
-
-    IDbCommand sessionCmd = conn.CreateCommand();
-    sessionCmd.CommandText = "ALTER SESSION SET MULTI_STATEMENT_COUNT = 1;";
-    sessionCmd.ExecuteNonQuery();
-
-    using (DbCommand cmd = (DbCommand)conn.CreateCommand())
-    {
-        // Set the statement count for this command
-        var stmtCountParam = cmd.CreateParameter();
-        stmtCountParam.ParameterName = "MULTI_STATEMENT_COUNT";
-        stmtCountParam.DbType = DbType.Int16;
-        stmtCountParam.Value = 3;
-        cmd.Parameters.Add(stmtCountParam);
-        cmd.CommandText = "CREATE OR REPLACE TABLE test(n int); INSERT INTO test values(1), (2); SELECT * FROM test ORDER BY n;";
-        DbDataReader reader = cmd.ExecuteReader();
-        do
-        {
-            if (reader.HasRows)
-            {
-                while (reader.Read())
-                {
-                    // read data
-                }
-            }
-        }
-        while (reader.NextResult());
-    }
-
-    conn.Close();
-}
-```
-
-## Bind Parameter
-
-**Note**: Snowflake does not currently support variable binding in multi-statement SQL requests.
-
-This example shows how bound parameters are converted from C# data types to
-Snowflake data types. For example, if the data type of the Snowflake column
-is INTEGER, then you can bind the C# data types Int32 or Int16.
-
-This example inserts 3 rows into a table with one column.
-
-```cs
-using (IDbConnection conn = new SnowflakeDbConnection())
-{
-    conn.ConnectionString = connectionString;
-    conn.Open();
-
-    IDbCommand cmd = conn.CreateCommand();
-    cmd.CommandText = "create or replace table T(cola int)";
-    int count = cmd.ExecuteNonQuery();
-    Assert.AreEqual(0, count);
-
-    cmd = conn.CreateCommand();
-    cmd.CommandText = "insert into t values (?), (?), (?)";
-
-    var p1 = cmd.CreateParameter();
-    p1.ParameterName = "1";
-    p1.Value = 10;
-    p1.DbType = DbType.Int32;
-    cmd.Parameters.Add(p1);
-
-    var p2 = cmd.CreateParameter();
-    p2.ParameterName = "2";
-    p2.Value = 10000L;
-    p2.DbType = DbType.Int32;
-    cmd.Parameters.Add(p2);
-
-    var p3 = cmd.CreateParameter();
-    p3.ParameterName = "3";
-    p3.Value = (short)1;
-    p3.DbType = DbType.Int16;
-    cmd.Parameters.Add(p3);
-
-    count = cmd.ExecuteNonQuery();
-    Assert.AreEqual(3, count);
-
-    cmd.CommandText = "drop table if exists T";
-    count = cmd.ExecuteNonQuery();
-    Assert.AreEqual(0, count);
-
-    conn.Close();
-}
-```
-
-## Bind Array Variables
-
-The sample code creates a table with a single integer column and then uses array binding to populate the table with 70000 values (0 through 69999).
+## Stage Files -```cs -using (IDbConnection conn = new SnowflakeDbConnection()) -{ - conn.ConnectionString = ConnectionString; - conn.Open(); - - using (IDbCommand cmd = conn.CreateCommand()) - { - cmd.CommandText = "create or replace table putArrayBind(colA integer)"; - cmd.ExecuteNonQuery(); - - string insertCommand = "insert into putArrayBind values (?)"; - cmd.CommandText = insertCommand; - - int total = 70000; - - List arrint = new List(); - for (int i = 0; i < total; i++) - { - arrint.Add(i); - } - var p1 = cmd.CreateParameter(); - p1.ParameterName = "1"; - p1.DbType = DbType.Int16; - p1.Value = arrint.ToArray(); - cmd.Parameters.Add(p1); - - count = cmd.ExecuteNonQuery(); // count = 70000 - } - - conn.Close(); -} -``` - -## PUT local files to stage - -PUT command can be used to upload files of a local directory or a single local file to the Snowflake stages (named, internal table stage or internal user stage). -Such staging files can be used to load data into a table. -More on this topic: [File staging with PUT](https://docs.snowflake.com/en/sql-reference/sql/put). - -In the driver the command can be executed in a bellow way: - -```cs -using (IDbConnection conn = new SnowflakeDbConnection()) -{ - try - { - conn.ConnectionString = ""; - conn.Open(); - var cmd = (SnowflakeDbCommand)conn.CreateCommand(); // cast allows get QueryId from the command - - cmd.CommandText = "PUT file://some_data.csv @my_schema.my_stage AUTO_COMPRESS=TRUE"; - var reader = cmd.ExecuteReader(); - Assert.IsTrue(reader.read()); - Assert.DoesNotThrow(() => Guid.Parse(cmd.GetQueryId())); - } - catch (SnowflakeDbException e) - { - Assert.DoesNotThrow(() => Guid.Parse(e.QueryId)); // when failed - Assert.That(e.InnerException.GetType(), Is.EqualTo(typeof(FileNotFoundException))); - } -``` - -In case of a failure a SnowflakeDbException exception will be thrown with affected QueryId if possible. 
-
If the failure occurred after the query was executed, the SnowflakeDbException contains the affected QueryId.
If the failure occurred in the initial phase of execution, a QueryId might not be provided.
The inner exception (if applicable) provides details on the cause of the failure,
for example: FileNotFoundException, DirectoryNotFoundException.

## GET stage files

The GET command downloads stage directories or files to a local directory.
It can be used with a named stage, a table internal stage, or a user stage.
Detailed information on the command: [Downloading files with GET](https://docs.snowflake.com/en/sql-reference/sql/get).

To use the command with the driver, similar code can be executed in a client app:

```cs
    try
    {
        conn.ConnectionString = "";
        conn.Open();
        var cmd = (SnowflakeDbCommand)conn.CreateCommand(); // cast allows getting the QueryId from the command

        cmd.CommandText = "GET @my_schema.my_stage/stage_file.csv file://local_file.csv";
        var reader = cmd.ExecuteReader();
        Assert.IsTrue(reader.Read()); // True on success, False on failure
        Assert.DoesNotThrow(() => Guid.Parse(cmd.GetQueryId()));
    }
    catch (SnowflakeDbException e)
    {
        Assert.DoesNotThrow(() => Guid.Parse(e.QueryId)); // on failure
    }
```

In case of a failure, a SnowflakeDbException is thrown with the affected QueryId when possible.
When no technical or syntax errors occur but the DbDataReader has no data to process, it returns False
without throwing an exception.

## Close the Connection

To close the connection, call the `Close` method of `SnowflakeDbConnection`.

If you want to avoid blocking threads while the connection is closing, call the `CloseAsync` method instead, passing in a
`CancellationToken`. This method was introduced in the v2.0.4 release.

Note that because this method is not available in the generic `IDbConnection` interface, you must cast the object as
`SnowflakeDbConnection` before calling the method.
For example:

```cs
CancellationTokenSource cancellationTokenSource = new CancellationTokenSource();
// Close the connection
((SnowflakeDbConnection)conn).CloseAsync(cancellationTokenSource.Token);
```

## Evict the Connection

For an open connection, call `PreventPooling()` to mark the connection for removal on close, instead of returning it to the pool.
The busy sessions counter is decreased when the connection is closed.

## Logging

The Snowflake Connector for .NET uses [log4net](http://logging.apache.org/log4net/) as the logging framework.

Here is a sample app.config file that uses [log4net](http://logging.apache.org/log4net/):

```xml
<configuration>
  <configSections>
    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
  </configSections>

  <log4net>
    <appender name="MyRollingFileAppender" type="log4net.Appender.RollingFileAppender">
      <file value="snowflake_dotnet.log" />
      <appendToFile value="true" />
      <rollingStyle value="Size" />
      <maximumFileSize value="10MB" />
      <staticLogFileName value="true" />
      <maxSizeRollBackups value="10" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="[%level] [%date] [%thread] [%logger] %message%newline" />
      </layout>
    </appender>

    <root>
      <level value="ALL" />
      <appender-ref ref="MyRollingFileAppender" />
    </root>
  </log4net>
</configuration>
```

## Easy logging

The Easy Logging feature lets you change the log level for all driver classes and add an extra file appender for logs from the driver's classes at runtime. You can specify the log levels and the directory in which to save log files in a configuration file (default: `sf_client_config.json`).

You typically change log levels only when debugging your application.

**Note**
This logging configuration file supports only the following log levels:

- OFF
- ERROR
- WARNING
- INFO
- DEBUG
- TRACE

This configuration file uses JSON to define the `log_level` and `log_path` logging parameters, as follows:

```json
{
  "common": {
    "log_level": "INFO",
    "log_path": "c:\\some-path\\some-directory"
  }
}
```

where:

- `log_level` is the desired logging level.
- `log_path` is the location to store the log files. The driver automatically creates a `dotnet` subdirectory in the specified `log_path`. For example, if you set log_path to `c:\logs`, the driver creates the `c:\logs\dotnet` directory and stores the logs there.

The driver looks for the location of the configuration file in the following order:

- `CLIENT_CONFIG_FILE` connection parameter, containing the full path to the configuration file (e.g. `"ACCOUNT=test;USER=test;PASSWORD=test;CLIENT_CONFIG_FILE=C:\\some-path\\client_config.json;"`)
- `SF_CLIENT_CONFIG_FILE` environment variable, containing the full path to the configuration file.
- .NET driver/application directory, where the file must be named `sf_client_config.json`.
- User’s home directory, where the file must be named `sf_client_config.json`.

**Note**
To enhance security, the driver no longer searches a temporary directory for easy logging configurations.
Additionally, on Unix-like systems, the driver requires the logging configuration file to restrict write permissions to the file owner only (for example, `chmod 0600` or `chmod 0644`).

To minimize the number of searches for a configuration file, the driver reads the file only for:

- The first connection.
- The first connection with the `CLIENT_CONFIG_FILE` parameter.

The extra logs are stored in a `dotnet` subfolder of the specified directory, such as `C:\some-path\some-directory\dotnet`.

If a client uses the `log4net` library for application logging, enabling easy logging affects the log level in those logs as well.

## Getting the code coverage

1. Go to the .NET project directory

2. Clean the directory

```
dotnet clean snowflake-connector-net.sln && dotnet nuget locals all --clear
```

3. Create parameters.json containing connection info for the AWS, AZURE, or GCP account and place it inside the Snowflake.Data.Tests folder

4. Build the project for .NET 6

```
dotnet build snowflake-connector-net.sln /p:DebugType=Full
```

5. Run dotnet-coverage on the .NET 6 build

```
dotnet-coverage collect "dotnet test --framework net6.0 --no-build -l console;verbosity=normal" --output net6.0_AWS_coverage.xml --output-format cobertura --settings coverage.config
```

6. Build the project for .NET Framework

```
msbuild snowflake-connector-net.sln -p:Configuration=Release
```

7. Run dotnet-coverage on the .NET Framework build

```
dotnet-coverage collect "dotnet test --framework net472 --no-build -l console;verbosity=normal" --output net472_AWS_coverage.xml --output-format cobertura --settings coverage.config
```
-Repeat steps 3, 5, and 7 for the other cloud providers.
-Note: no need to rebuild the connector again.

- -For Azure:
- -3. Create parameters.json containing connection info for AZURE account and place inside the Snowflake.Data.Tests folder - -4. Run dotnet-cover on the .NET6 build - -``` -dotnet-coverage collect "dotnet test --framework net6.0 --no-build -l console;verbosity=normal" --output net6.0_AZURE_coverage.xml --output-format cobertura --settings coverage.config -``` - -7. Run dotnet-cover on the .NET Framework build - -``` -dotnet-coverage collect "dotnet test --framework net472 --no-build -l console;verbosity=normal" --output net472_AZURE_coverage.xml --output-format cobertura --settings coverage.config -``` - -
-For GCP:
- -3. Create parameters.json containing connection info for GCP account and place inside the Snowflake.Data.Tests folder - -4. Run dotnet-cover on the .NET6 build - -``` -dotnet-coverage collect "dotnet test --framework net6.0 --no-build -l console;verbosity=normal" --output net6.0_GCP_coverage.xml --output-format cobertura --settings coverage.config -``` - -7. Run dotnet-cover on the .NET Framework build - -``` -dotnet-coverage collect "dotnet test --framework net472 --no-build -l console;verbosity=normal" --output net472_GCP_coverage.xml --output-format cobertura --settings coverage.config -``` +--------------- ## Notice @@ -1018,4 +136,13 @@ dotnet-coverage collect "dotnet test --framework net472 --no-build -l console;ve Snowflake has identified an issue where the driver is globally enforcing TLS 1.2 and certificate revocation checks with the .NET Driver v1.2.1 and earlier versions. Starting with v2.0.0, the driver will set these locally. +4. Certificate Revocation List not performed where insecureMode was disabled - + Snowflake has identified vulnerability where the checks against the Certificate Revocation List (CRL) + were not performed where the insecureMode flag was set to false, which is the default setting. + Note that the driver is now targeting .NET 6.0. When upgrading, you might also need to run “Update-Package -reinstall” to update the dependencies. + +See more: +* [Security Policy](SECURITY.md) +* [Security Advisories](/security/advisories) + diff --git a/doc/CodeCoverage.md b/doc/CodeCoverage.md new file mode 100644 index 000000000..497219494 --- /dev/null +++ b/doc/CodeCoverage.md @@ -0,0 +1,72 @@ +## Getting the code coverage + +1. Go to .NET project directory + +2. Clean the directory + +``` +dotnet clean snowflake-connector-net.sln && dotnet nuget locals all --clear +``` + +3. Create parameters.json containing connection info for AWS, AZURE, or GCP account and place inside the Snowflake.Data.Tests folder + +4. 
Build the project for .NET 6
+
+```
+dotnet build snowflake-connector-net.sln /p:DebugType=Full
+```
+
+5. Run dotnet-coverage on the .NET 6 build
+
+```
+dotnet-coverage collect "dotnet test --framework net6.0 --no-build -l console;verbosity=normal" --output net6.0_AWS_coverage.xml --output-format cobertura --settings coverage.config
+```
+
+6. Build the project for .NET Framework
+
+```
+msbuild snowflake-connector-net.sln -p:Configuration=Release
+```
+
+7. Run dotnet-coverage on the .NET Framework build
+
+```
+dotnet-coverage collect "dotnet test --framework net472 --no-build -l console;verbosity=normal" --output net472_AWS_coverage.xml --output-format cobertura --settings coverage.config
+```
+Repeat steps 3, 5, and 7 for the other cloud providers.
+Note: no need to rebuild the connector again.
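If helpful, the per-provider repetition can be scripted. The sketch below only prints the step 5 and step 7 commands for each provider (it assumes you swap in the matching parameters.json before actually running each pair):

```shell
# Dry-run helper: print the dotnet-coverage invocations for every provider.
# Review the output (or pipe it to `sh` after staging the right parameters.json).
for provider in AWS AZURE GCP; do
  for fw in net6.0 net472; do
    echo "dotnet-coverage collect \"dotnet test --framework ${fw} --no-build -l console;verbosity=normal\" --output ${fw}_${provider}_coverage.xml --output-format cobertura --settings coverage.config"
  done
done
```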

+ +For Azure:
+
+3. Create parameters.json containing connection info for AZURE account and place inside the Snowflake.Data.Tests folder
+
+5. Run dotnet-coverage on the .NET 6 build
+
+```
+dotnet-coverage collect "dotnet test --framework net6.0 --no-build -l console;verbosity=normal" --output net6.0_AZURE_coverage.xml --output-format cobertura --settings coverage.config
+```
+
+7. Run dotnet-coverage on the .NET Framework build
+
+```
+dotnet-coverage collect "dotnet test --framework net472 --no-build -l console;verbosity=normal" --output net472_AZURE_coverage.xml --output-format cobertura --settings coverage.config
+```
+For GCP:
+
+3. Create parameters.json containing connection info for GCP account and place inside the Snowflake.Data.Tests folder
+
+5. Run dotnet-coverage on the .NET 6 build
+
+```
+dotnet-coverage collect "dotnet test --framework net6.0 --no-build -l console;verbosity=normal" --output net6.0_GCP_coverage.xml --output-format cobertura --settings coverage.config
+```
+
+7. Run dotnet-coverage on the .NET Framework build
+
+```
+dotnet-coverage collect "dotnet test --framework net472 --no-build -l console;verbosity=normal" --output net472_GCP_coverage.xml --output-format cobertura --settings coverage.config
+```
diff --git a/doc/Connecting.md b/doc/Connecting.md
new file mode 100644
index 000000000..6381be794
--- /dev/null
+++ b/doc/Connecting.md
@@ -0,0 +1,275 @@
+## Connecting
+
+To connect to Snowflake, specify a valid connection string composed of key-value pairs separated by semicolons,
+i.e. "\<key1>=\<value1>;\<key2>=\<value2>...".
+
+**Note**: If the keyword or value contains an equal sign (=), you must precede the equal sign with another equal sign. For example, if the keyword is "key" and the value is "value_part1=value_part2", use "key=value_part1==value_part2".
+
+The following table lists all valid connection properties:
+ +| Connection Property | Required | Comment | +|--------------------------------| -------- |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| ACCOUNT | Yes | Your full account name might include additional segments that identify the region and cloud platform where your account is hosted | +| APPLICATION | No | **_Snowflake partner use only_**: Specifies the name of a partner application to connect through .NET. The name must match the following pattern: ^\[A-Za-z](\[A-Za-z0-9.-]){1,50}$ (one letter followed by 1 to 50 letter, digit, .,- or, \_ characters). | +| DB | No | | +| HOST | No | Specifies the hostname for your account in the following format: \.snowflakecomputing.com.
If no value is specified, the driver uses \.snowflakecomputing.com. | +| PASSWORD | Depends | Required if AUTHENTICATOR is set to `snowflake` (the default value) or the URL for native SSO through Okta. Ignored for all the other authentication types. | +| ROLE | No | | +| SCHEMA | No | | +| USER | Depends | If AUTHENTICATOR is set to `externalbrowser` this is optional. For native SSO through Okta, set this to the login name for your identity provider (IdP). | +| WAREHOUSE | No | | +| CONNECTION_TIMEOUT | No | Total timeout in seconds when connecting to Snowflake. The default is 300 seconds | +| RETRY_TIMEOUT | No | Total timeout in seconds for supported endpoints of retry policy. The default is 300 seconds. The value can only be increased from the default value or set to 0 for infinite timeout | +| MAXHTTPRETRIES | No | Maximum number of times to retry failed HTTP requests (default: 7). You can set `MAXHTTPRETRIES=0` to remove the retry limit, but doing so runs the risk of the .NET driver infinitely retrying failed HTTP calls. | +| CLIENT_SESSION_KEEP_ALIVE | No | Whether to keep the current session active after a period of inactivity, or to force the user to login again. If the value is `true`, Snowflake keeps the session active indefinitely, even if there is no activity from the user. If the value is `false`, the user must log in again after four hours of inactivity. The default is `false`. Setting this value overrides the server session property for the current session. | +| BROWSER_RESPONSE_TIMEOUT | No | Number to seconds to wait for authentication in an external browser (default: 120). | +| DISABLERETRY | No | Set this property to `true` to prevent the driver from reconnecting automatically when the connection fails or drops. The default value is `false`. | +| AUTHENTICATOR | No | The method of authentication. Currently supports the following values:
- snowflake (default): You must also set USER and PASSWORD.
- [the URL for native SSO through Okta](https://docs.snowflake.com/en/user-guide/admin-security-fed-auth-use.html#native-sso-okta-only): You must also set USER and PASSWORD.
- [externalbrowser](https://docs.snowflake.com/en/user-guide/admin-security-fed-auth-use.html#browser-based-sso): You must also set USER.
- [snowflake_jwt](https://docs.snowflake.com/en/user-guide/key-pair-auth.html): You must also set PRIVATE_KEY_FILE or PRIVATE_KEY.
- [oauth](https://docs.snowflake.com/en/user-guide/oauth.html): You must also set TOKEN. | +| VALIDATE_DEFAULT_PARAMETERS | No | Whether DB, SCHEMA and WAREHOUSE should be verified when making connection. Default to be true. | +| PRIVATE_KEY_FILE | Depends | The path to the private key file to use for key-pair authentication. Must be used in combination with AUTHENTICATOR=snowflake_jwt | +| PRIVATE_KEY_PWD | No | The passphrase to use for decrypting the private key, if the key is encrypted. | +| PRIVATE_KEY | Depends | The private key to use for key-pair authentication. Must be used in combination with AUTHENTICATOR=snowflake_jwt.
If the private key value includes any equal signs (=), make sure to replace each equal sign with two signs (==) to ensure that the connection string is parsed correctly. | +| TOKEN | Depends | The OAuth token to use for OAuth authentication. Must be used in combination with AUTHENTICATOR=oauth. | +| INSECUREMODE | No | Set to true to disable the certificate revocation list check. Default is false. | +| USEPROXY | No | Set to true if you need to use a proxy server. The default value is false.

This parameter was introduced in v2.0.4. | +| PROXYHOST | Depends | The hostname of the proxy server.

If USEPROXY is set to `true`, you must set this parameter.

This parameter was introduced in v2.0.4. | +| PROXYPORT | Depends | The port number of the proxy server.

If USEPROXY is set to `true`, you must set this parameter.

This parameter was introduced in v2.0.4. | +| PROXYUSER | No | The username for authenticating to the proxy server.

This parameter was introduced in v2.0.4. | +| PROXYPASSWORD | Depends | The password for authenticating to the proxy server.

If USEPROXY is `true` and PROXYUSER is set, you must set this parameter.

This parameter was introduced in v2.0.4. | +| NONPROXYHOSTS | No | The list of hosts that the driver should connect to directly, bypassing the proxy server. Separate the hostnames with a pipe symbol (\|). You can also use an asterisk (`*`) as a wildcard.
The host target value should fully match with any item from the proxy host list to bypass the proxy server.

This parameter was introduced in v2.0.4. | +| FILE_TRANSFER_MEMORY_THRESHOLD | No | The maximum number of bytes to store in memory used in order to provide a file encryption. If encrypting/decrypting file size exceeds provided value a temporary file will be created and the work will be continued in the temporary file instead of memory.
If no value provided 1MB will be used as a default value (that is 1048576 bytes).
It is possible to configure any integer value bigger than zero representing maximal number of bytes to reside in memory. | +| CLIENT_CONFIG_FILE | No | The location of the client configuration json file. In this file you can configure easy logging feature. | +| ALLOWUNDERSCORESINHOST | No | Specifies whether to allow underscores in account names. This impacts PrivateLink customers whose account names contain underscores. In this situation, you must override the default value by setting allowUnderscoresInHost to true. | +| QUERY_TAG | No | Optional string that can be used to tag queries and other SQL statements executed within a connection. The tags are displayed in the output of the QUERY_HISTORY , QUERY_HISTORY_BY_* functions. | +| MAXPOOLSIZE | No | Maximum number of connections in a pool. Default value is 10. `maxPoolSize` value cannot be lower than `minPoolSize` value. | +| MINPOOLSIZE | No | Expected minimum number of connections in pool. When you get a connection from the pool, more connections might be initialised in background to increase the pool size to `minPoolSize`. If you specify 0 or 1 there will be no attempts to create extra initialisations in background. The default value is 2. `maxPoolSize` value cannot be lower than `minPoolSize` value. The parameter is used only in a new version of connection pool. | +| CHANGEDSESSION | No | Specifies what should happen with a closed connection when some of its session variables are altered (e. g. you used `ALTER SESSION SET SCHEMA` to change the databese schema). The default behaviour is `OriginalPool` which means the session stays in the original pool. Currently no other option is possible. Parameter used only in a new version of connection pool. | +| WAITINGFORIDLESESSIONTIMEOUT | No | Timeout for waiting for an idle session when pool is full. It happens when there is no idle session and we cannot create a new one because of reaching `maxPoolSize`. The default value is 30 seconds. 
Usage of units possible and allowed are: e. g. `1000ms` (milliseconds), `15s` (seconds), `2m` (minutes) where seconds are default for a skipped postfix. Special values: `0` - immediate fail for new connection to open when session is full. You cannot specify infinite value. | +| EXPIRATIONTIMEOUT | No | Timeout for using each connection. Connections which last more than specified timeout are considered to be expired and are being removed from the pool. The default is 1 hour. Usage of units possible and allowed are: e. g. `360000ms` (milliseconds), `3600s` (seconds), `60m` (minutes) where seconds are default for a skipped postfix. Special values: `0` - immediate expiration of the connection just after its creation. Expiration timeout cannot be set to infinity. | +| POOLINGENABLED | No | Boolean flag indicating if the connection should be a part of a pool. The default value is `true`. | + +
+
+### Password-based Authentication
+
+The following example demonstrates how to open a connection to Snowflake. This example uses a password for authentication.
+
+```cs
+using (IDbConnection conn = new SnowflakeDbConnection())
+{
+    conn.ConnectionString = "account=testaccount;user=testuser;password=XXXXX;db=testdb;schema=testschema";
+
+    conn.Open();
+
+    conn.Close();
+}
+```
+
+Beginning with version 2.0.18, the .NET connector uses Microsoft [DbConnectionStringBuilder](https://learn.microsoft.com/en-us/dotnet/api/system.data.oledb.oledbconnection.connectionstring?view=dotnet-plat-ext-6.0#remarks) to follow the .NET specification for escaping characters in connection strings.
+
+The following examples show how you can include different types of special characters in a connection string:
+
+- To include a single quote (') character:
+
+  ```cs
+  string connectionString = String.Format(
+      "account=testaccount; " +
+      "user=testuser; " +
+      "password=test'password;"
+  );
+  ```
+
+- To include a double quote (") character:
+
+  ```cs
+  string connectionString = String.Format(
+      "account=testaccount; " +
+      "user=testuser; " +
+      "password=test\"password;"
+  );
+  ```
+
+- To include a semicolon (;):
+
+  ```cs
+  string connectionString = String.Format(
+      "account=testaccount; " +
+      "user=testuser; " +
+      "password=\"test;password\";"
+  );
+  ```
+
+- To include an equal sign (=):
+
+  ```cs
+  string connectionString = String.Format(
+      "account=testaccount; " +
+      "user=testuser; " +
+      "password=test=password;"
+  );
+  ```
+
+  Note that previously you needed to use a double equal sign (==) to escape the character. However, beginning with version 2.0.18, you can use a single equal sign.
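Rather than escaping by hand, you can also let `DbConnectionStringBuilder` quote special characters for you. A minimal sketch (the account, user, and password values below are placeholders):

```csharp
using System;
using System.Data.Common;

// Build a connection string whose password contains ';' and '=' characters.
// DbConnectionStringBuilder quotes the value so that it parses back correctly.
var builder = new DbConnectionStringBuilder
{
    ["account"] = "testaccount",
    ["user"] = "testuser",
    ["password"] = "test;pass=word"
};

Console.WriteLine(builder.ConnectionString);

// Round-trip: parsing the string back recovers the original value.
var parsed = new DbConnectionStringBuilder { ConnectionString = builder.ConnectionString };
Console.WriteLine((string)parsed["password"]); // test;pass=word
```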
+ +### Other Authentication Methods + +If you are using a different method for authentication, see the examples below: + +- **Key-pair authentication** + + After setting up [key-pair authentication](https://docs.snowflake.com/en/user-guide/key-pair-auth.html), you can specify the + private key for authentication in one of the following ways: + + - Specify the file containing an unencrypted private key: + + ```cs + using (IDbConnection conn = new SnowflakeDbConnection()) + { + conn.ConnectionString = "account=testaccount;authenticator=snowflake_jwt;user=testuser;private_key_file={pathToThePrivateKeyFile};db=testdb;schema=testschema"; + + conn.Open(); + + conn.Close(); + } + ``` + + where: + + - `{pathToThePrivateKeyFile}` is the path to the file containing the unencrypted private key. + + - Specify the file containing an encrypted private key: + + ```cs + using (IDbConnection conn = new SnowflakeDbConnection()) + { + conn.ConnectionString = "account=testaccount;authenticator=snowflake_jwt;user=testuser;private_key_file={pathToThePrivateKeyFile};private_key_pwd={passwordForDecryptingThePrivateKey};db=testdb;schema=testschema"; + + conn.Open(); + + conn.Close(); + } + ``` + + where: + + - `{pathToThePrivateKeyFile}` is the path to the file containing the unencrypted private key. + - `{passwordForDecryptingThePrivateKey}` is the password for decrypting the private key. + + - Specify an unencrypted private key (read from a file): + + ```cs + using (IDbConnection conn = new SnowflakeDbConnection()) + { + string privateKeyContent = File.ReadAllText({pathToThePrivateKeyFile}); + + conn.ConnectionString = String.Format("account=testaccount;authenticator=snowflake_jwt;user=testuser;private_key={0};db=testdb;schema=testschema", privateKeyContent); + + conn.Open(); + + conn.Close(); + } + ``` + + where: + + - `{pathToThePrivateKeyFile}` is the path to the file containing the unencrypted private key. 
+ +- **OAuth** + + After setting up [OAuth](https://docs.snowflake.com/en/user-guide/oauth.html), set `AUTHENTICATOR=oauth` and `TOKEN` to the + OAuth token in the connection string. + + ```cs + using (IDbConnection conn = new SnowflakeDbConnection()) + { + conn.ConnectionString = "account=testaccount;user=testuser;authenticator=oauth;token={oauthTokenValue};db=testdb;schema=testschema"; + + conn.Open(); + + conn.Close(); + } + ``` + + where: + + - `{oauthTokenValue}` is the oauth token to use for authentication. + +- **Browser-based SSO** + + In the connection string, set `AUTHENTICATOR=externalbrowser`. + Optionally, `USER` can be set. In that case only if user authenticated via external browser matches the one from configuration, authentication will complete. + + ```cs + using (IDbConnection conn = new SnowflakeDbConnection()) + { + conn.ConnectionString = "account=testaccount;authenticator=externalbrowser;user={login_name_for_IdP};db=testdb;schema=testschema"; + + conn.Open(); + + conn.Close(); + } + ``` + + where: + + - `{login_name_for_IdP}` is your login name for your IdP. + + You can override the default timeout after which external browser authentication is marked as failed. + The timeout prevents the infinite hang when the user does not provide the login details, e.g. when closing the browser tab. + To override, you can provide `BROWSER_RESPONSE_TIMEOUT` parameter (in seconds). + +- **Native SSO through Okta** + + In the connection string, set `AUTHENTICATOR` to the + [URL of the endpoint for your Okta account](https://docs.snowflake.com/en/user-guide/admin-security-fed-auth-use.html#label-native-sso-okta), + and set `USER` to the login name for your IdP. 
+
+  ```cs
+  using (IDbConnection conn = new SnowflakeDbConnection())
+  {
+      conn.ConnectionString = "account=testaccount;authenticator={okta_url_endpoint};user={login_name_for_IdP};db=testdb;schema=testschema";
+
+      conn.Open();
+
+      conn.Close();
+  }
+  ```
+
+  where:
+
+  - `{okta_url_endpoint}` is the URL for the endpoint for your Okta account (e.g. `https://<okta_account_name>.okta.com`).
+  - `{login_name_for_IdP}` is your login name for your IdP.
+
+In v2.0.4 and later releases, you can configure the driver to connect through a proxy server. The following example configures the
+driver to connect through the proxy server `myproxyserver` on port `8888`. The driver authenticates to the proxy server as the
+user `test` with the password `test`:
+
+```cs
+using (IDbConnection conn = new SnowflakeDbConnection())
+{
+    conn.ConnectionString = "account=testaccount;user=testuser;password=XXXXX;db=testdb;schema=testschema;useProxy=true;proxyHost=myproxyserver;proxyPort=8888;proxyUser=test;proxyPassword=test";
+
+    conn.Open();
+
+    conn.Close();
+}
+```
+
+The NONPROXYHOSTS property can be set to specify hosts for which the driver should bypass the proxy server. Each entry should be the full host URL, or a URL that includes the `*` wildcard symbol.
+
+Examples:
+
+- `*` (bypasses the proxy for all hosts)
+- `*.snowflakecomputing.com` (bypasses the proxy for all hosts that end with `snowflakecomputing.com`)
+- `https://testaccount.snowflakecomputing.com` (bypasses the proxy using the full host URL).
+- `*.myserver.com | *testaccount*` (you can specify multiple patterns for the property, divided by `|`)
+
+
+> Note: The NONPROXYHOSTS value should match the full URL, including the http or https section. The `*` wildcard can be added to match the hostname successfully.
+
+- `myaccount.snowflakecomputing.com` (Not bypassed).
+- `*myaccount.snowflakecomputing.com` (Bypassed).
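To make the matching rules above concrete, here is a small hypothetical sketch of how such wildcard entries can be evaluated; it illustrates the documented behavior and is not the driver's actual implementation:

```csharp
using System;
using System.Text.RegularExpressions;

// Hypothetical matcher: each '|'-separated entry must match the FULL target,
// with '*' acting as a wildcard (as in the NONPROXYHOSTS examples above).
static bool BypassesProxy(string nonProxyHosts, string target)
{
    foreach (var entry in nonProxyHosts.Split('|'))
    {
        var pattern = "^" + Regex.Escape(entry.Trim()).Replace("\\*", ".*") + "$";
        if (Regex.IsMatch(target, pattern, RegexOptions.IgnoreCase))
            return true;
    }
    return false;
}

Console.WriteLine(BypassesProxy("*myaccount.snowflakecomputing.com",
                                "https://myaccount.snowflakecomputing.com")); // True
Console.WriteLine(BypassesProxy("myaccount.snowflakecomputing.com",
                                "https://myaccount.snowflakecomputing.com")); // False: no wildcard, so the scheme prevents a full match
```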
+ diff --git a/FeaturePooling.md b/doc/ConnectionPooling.md similarity index 75% rename from FeaturePooling.md rename to doc/ConnectionPooling.md index 5dc8ca8e0..41ed2ec11 100644 --- a/FeaturePooling.md +++ b/doc/ConnectionPooling.md @@ -110,14 +110,57 @@ Assert.AreEqual(2, poolSize); #### Changed Session Behavior -When an application does a change to the connection using one of SQL commands: `use schema`, `use database`, `use warehouse`, `use role` then such -an affected connection is marked internally as no longer matching with the pool it originated from. +When an application does a change to the connection using one of SQL commands: +* `use schema`, `create schema` +* `use database`, `create database` +* `use warehouse`, `create warehouse` +* `use role`, `create role` + +then such an affected connection is marked internally as no longer matching with the pool it originated from. When parameter ChangedSession is set to `OriginalPool` it allows the connection to be pooled. Parameter ChangedSession set to `Destroy` (default) ensures that the connection is not pooled and after Close is called the connection will be removed. The pool will recreate necessary connections according to the minimal pool size. +1) ChangedSession = Destroy + +In this mode application may safely alter session properties: schema, database, warehouse, role. Connection not matching +with the connection string will not get pooled. 
+
+```cs
+var connectionString = ConnectionString + ";ChangedSession=Destroy";
+var connection = new SnowflakeDbConnection(connectionString);
+
+connection.Open();
+var randomSchemaName = Guid.NewGuid();
+var alterCommand = connection.CreateCommand();
+alterCommand.CommandText = $"create schema \"{randomSchemaName}\"";
+alterCommand.ExecuteNonQuery(); // schema gets changed
+// application is running commands on a schema with a random name
+connection.Close(); // connection does not return to the original pool and gets destroyed; the pool will be refilled
+                    // with new connections according to the MinPoolSize
+
+var connection2 = new SnowflakeDbConnection(connectionString);
+connection2.Open();
+// operations here will be performed against the schema indicated in the ConnectionString
+```
+
+2) ChangedSession = OriginalPool
+
+When an application reuses connections affected by the above commands, it might reach a point where using a connection
+produces errors because tables, procedures, or stages do not exist, since the operations are executed against the wrong
+database, schema, user, or role. This mode exists purely for backward compatibility and is not recommended.
+
+```cs
+var connectionString = ConnectionString + ";ChangedSession=OriginalPool;MinPoolSize=1;MaxPoolSize=1";
+var connection = new SnowflakeDbConnection(connectionString);
+
+connection.Open();
+var randomSchemaName = Guid.NewGuid();
+var alterCommand = connection.CreateCommand();
+alterCommand.CommandText = $"create schema \"{randomSchemaName}\"";
+alterCommand.ExecuteNonQuery(); // schema gets changed
+// application is running commands on a schema with a random name
+connection.Close(); // connection returns to the original pool but its schema will no longer match the initial value
+var connection2 = new SnowflakeDbConnection(connectionString);
+connection2.Open();
+// operations here will be performed against schema: randomSchemaName
 ```
 
 #### Pool Size Exceeded Timeout
@@ -268,62 +311,3 @@ There is also a way to clear all the pools initiated by an application.
```cs SnowflakeDbConnectionPool.ClearAllPools(); ``` - -### Single Connection Pool - -DEPRECATED VERSION - -Instead of creating a connection each time your client application needs to access Snowflake, you can define a cache of Snowflake connections that can be reused as needed. -Connection pooling usually reduces the lag time to make a connection. However, it can slow down client failover to an alternative DNS when a DNS problem occurs. - -The Snowflake .NET driver provides the following functions for managing connection pools. - -| Function | Description | -|-------------------------------------------------|---------------------------------------------------------------------------------------------------------| -| SnowflakeDbConnectionPool.ClearAllPools() | Removes all connections from the connection pool. | -| SnowflakeDbConnection.SetMaxPoolSize(n) | Sets the maximum number of connections for the connection pool, where _n_ is the number of connections. | -| SnowflakeDBConnection.SetTimeout(n) | Sets the number of seconds to keep an unresponsive connection in the connection pool. | -| SnowflakeDbConnectionPool.GetCurrentPoolSize() | Returns the number of connections currently in the connection pool. | -| SnowflakeDbConnectionPool.SetPooling() | Determines whether to enable (`true`) or disable (`false`) connection pooling. Default: `true`. | - -The following sample demonstrates how to monitor the size of a connection pool as connections are added and dropped from the pool. 
- -```cs -public void TestConnectionPoolClean() -{ - SnowflakeDbConnectionPool.ClearAllPools(); - SnowflakeDbConnectionPool.SetMaxPoolSize(2); - var conn1 = new SnowflakeDbConnection(); - conn1.ConnectionString = ConnectionString; - conn1.Open(); - Assert.AreEqual(ConnectionState.Open, conn1.State); - - var conn2 = new SnowflakeDbConnection(); - conn2.ConnectionString = ConnectionString + " retryCount=1"; - conn2.Open(); - Assert.AreEqual(ConnectionState.Open, conn2.State); - Assert.AreEqual(0, SnowflakeDbConnectionPool.GetCurrentPoolSize()); - conn1.Close(); - conn2.Close(); - Assert.AreEqual(2, SnowflakeDbConnectionPool.GetCurrentPoolSize()); - var conn3 = new SnowflakeDbConnection(); - conn3.ConnectionString = ConnectionString + " retryCount=2"; - conn3.Open(); - Assert.AreEqual(ConnectionState.Open, conn3.State); - - var conn4 = new SnowflakeDbConnection(); - conn4.ConnectionString = ConnectionString + " retryCount=3"; - conn4.Open(); - Assert.AreEqual(ConnectionState.Open, conn4.State); - - conn3.Close(); - Assert.AreEqual(2, SnowflakeDbConnectionPool.GetCurrentPoolSize()); - conn4.Close(); - Assert.AreEqual(2, SnowflakeDbConnectionPool.GetCurrentPoolSize()); - - Assert.AreEqual(ConnectionState.Closed, conn1.State); - Assert.AreEqual(ConnectionState.Closed, conn2.State); - Assert.AreEqual(ConnectionState.Closed, conn3.State); - Assert.AreEqual(ConnectionState.Closed, conn4.State); -} -``` diff --git a/doc/ConnectionPoolingDeprecated.md b/doc/ConnectionPoolingDeprecated.md new file mode 100644 index 000000000..084246ff1 --- /dev/null +++ b/doc/ConnectionPoolingDeprecated.md @@ -0,0 +1,60 @@ +## Using Connection Pools + +### Single Connection Pool (DEPRECATED) + +DEPRECATED VERSION + +Instead of creating a connection each time your client application needs to access Snowflake, you can define a cache of Snowflake connections that can be reused as needed. +Connection pooling usually reduces the lag time to make a connection. 
However, it can slow down client failover to an alternative DNS when a DNS problem occurs.
+
+The Snowflake .NET driver provides the following functions for managing connection pools.
+
+| Function | Description |
+|-------------------------------------------------|---------------------------------------------------------------------------------------------------------|
+| SnowflakeDbConnectionPool.ClearAllPools() | Removes all connections from the connection pool. |
+| SnowflakeDbConnection.SetMaxPoolSize(n) | Sets the maximum number of connections for the connection pool, where _n_ is the number of connections. |
+| SnowflakeDbConnection.SetTimeout(n) | Sets the number of seconds to keep an unresponsive connection in the connection pool. |
+| SnowflakeDbConnectionPool.GetCurrentPoolSize() | Returns the number of connections currently in the connection pool. |
+| SnowflakeDbConnectionPool.SetPooling() | Determines whether to enable (`true`) or disable (`false`) connection pooling. Default: `true`. |
+
+The following sample demonstrates how to monitor the size of a connection pool as connections are added and dropped from the pool.
+ +```cs +public void TestConnectionPoolClean() +{ + SnowflakeDbConnectionPool.ClearAllPools(); + SnowflakeDbConnectionPool.SetMaxPoolSize(2); + var conn1 = new SnowflakeDbConnection(); + conn1.ConnectionString = ConnectionString; + conn1.Open(); + Assert.AreEqual(ConnectionState.Open, conn1.State); + + var conn2 = new SnowflakeDbConnection(); + conn2.ConnectionString = ConnectionString + " retryCount=1"; + conn2.Open(); + Assert.AreEqual(ConnectionState.Open, conn2.State); + Assert.AreEqual(0, SnowflakeDbConnectionPool.GetCurrentPoolSize()); + conn1.Close(); + conn2.Close(); + Assert.AreEqual(2, SnowflakeDbConnectionPool.GetCurrentPoolSize()); + var conn3 = new SnowflakeDbConnection(); + conn3.ConnectionString = ConnectionString + " retryCount=2"; + conn3.Open(); + Assert.AreEqual(ConnectionState.Open, conn3.State); + + var conn4 = new SnowflakeDbConnection(); + conn4.ConnectionString = ConnectionString + " retryCount=3"; + conn4.Open(); + Assert.AreEqual(ConnectionState.Open, conn4.State); + + conn3.Close(); + Assert.AreEqual(2, SnowflakeDbConnectionPool.GetCurrentPoolSize()); + conn4.Close(); + Assert.AreEqual(2, SnowflakeDbConnectionPool.GetCurrentPoolSize()); + + Assert.AreEqual(ConnectionState.Closed, conn1.State); + Assert.AreEqual(ConnectionState.Closed, conn2.State); + Assert.AreEqual(ConnectionState.Closed, conn3.State); + Assert.AreEqual(ConnectionState.Closed, conn4.State); +} +``` diff --git a/doc/DataTypes.md b/doc/DataTypes.md new file mode 100644 index 000000000..7db51cdb7 --- /dev/null +++ b/doc/DataTypes.md @@ -0,0 +1,41 @@ +## Data Types and Formats + +## Mapping .NET and Snowflake Data Types + +The .NET driver supports the following mappings from .NET to Snowflake data types. 
+
+| .NET Framework Data Type | Data Type in Snowflake |
+| ------------------------- | ---------------------- |
+| `int`, `long` | `NUMBER(38, 0)` |
+| `decimal` | `NUMBER(38, <scale>)` |
+| `double` | `REAL` |
+| `string` | `TEXT` |
+| `bool` | `BOOLEAN` |
+| `byte` | `BINARY` |
+| `datetime` | `DATE` |
+
+## Arrow data format
+
+The .NET connector, starting with v2.1.3, supports the [Arrow data format](https://arrow.apache.org/)
+as a [preview](https://docs.snowflake.com/en/release-notes/preview-features) feature for data transfers
+between Snowflake and a .NET client. The Arrow data format avoids extra
+conversions between binary and textual representations of the data. The Arrow
+data format can improve performance and reduce memory consumption in clients.
+
+The data format is controlled by the
+DOTNET_QUERY_RESULT_FORMAT parameter. To use Arrow format, execute:
+
+```snowflake
+-- at the session level
+ALTER SESSION SET DOTNET_QUERY_RESULT_FORMAT = ARROW;
+-- or at the user level
+ALTER USER SET DOTNET_QUERY_RESULT_FORMAT = ARROW;
+-- or at the account level
+ALTER ACCOUNT SET DOTNET_QUERY_RESULT_FORMAT = ARROW;
+```
+
+The valid values for the parameter are:
+
+- ARROW
+- JSON (default)
+
diff --git a/doc/Disconnecting.md b/doc/Disconnecting.md
new file mode 100644
index 000000000..b6fd5b18a
--- /dev/null
+++ b/doc/Disconnecting.md
@@ -0,0 +1,21 @@
+## Close the Connection
+
+To close the connection, call the `Close` method of `SnowflakeDbConnection`.
+
+If you want to avoid blocking threads while the connection is closing, call the `CloseAsync` method instead, passing in a
+`CancellationToken`. This method was introduced in the v2.0.4 release.
+
+Note that because this method is not available in the generic `IDbConnection` interface, you must cast the object as
+`SnowflakeDbConnection` before calling the method.
For example:

+```cs
+CancellationTokenSource cancellationTokenSource = new CancellationTokenSource();
+// Close the connection
+((SnowflakeDbConnection)conn).CloseAsync(cancellationTokenSource.Token);
+```
+
+## Evict the Connection
+
+For an open connection, call `PreventPooling()` to mark the connection for removal on close instead of returning it to the pool.
+The busy sessions counter will be decreased when the connection is closed.
+
diff --git a/doc/Logging.md b/doc/Logging.md
new file mode 100644
index 000000000..18c235e7e
--- /dev/null
+++ b/doc/Logging.md
@@ -0,0 +1,82 @@
+## Logging
+
+The Snowflake Connector for .NET uses [log4net](http://logging.apache.org/log4net/) as the logging framework.
+
+Here is a sample app.config file that uses [log4net](http://logging.apache.org/log4net/)
+
+```xml
+
+<configuration>
+  <configSections>
+    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
+  </configSections>
+
+  <log4net>
+    <appender name="MyRollingFileAppender" type="log4net.Appender.RollingFileAppender">
+      <file value="snowflake_dotnet.log" />
+      <appendToFile value="true" />
+      <rollingStyle value="Size" />
+      <maximumFileSize value="10MB" />
+      <staticLogFileName value="true" />
+      <maxSizeRollBackups value="10" />
+      <layout type="log4net.Layout.PatternLayout">
+        <conversionPattern value="[%date] [%t] [%-5level] [%logger] %message%newline" />
+      </layout>
+    </appender>
+
+    <root>
+      <level value="ALL" />
+      <appender-ref ref="MyRollingFileAppender" />
+    </root>
+  </log4net>
+</configuration>
+```
+
+## Easy logging
+
+The Easy Logging feature lets you change the log level for all driver classes and add an extra file appender for logs from the driver's classes at runtime. You can specify the log levels and the directory in which to save log files in a configuration file (default: `sf_client_config.json`).
+
+You typically change log levels only when debugging your application.
+
+**Note**
+The easy logging configuration file supports only the following log levels:
+
+- OFF
+- ERROR
+- WARNING
+- INFO
+- DEBUG
+- TRACE
+
+This configuration file uses JSON to define the `log_level` and `log_path` logging parameters, as follows:
+
+```json
+{
+  "common": {
+    "log_level": "INFO",
+    "log_path": "c:\\some-path\\some-directory"
+  }
+}
+```
+
+where:
+
+- `log_level` is the desired logging level.
+- `log_path` is the location to store the log files. The driver automatically creates a `dotnet` subdirectory in the specified `log_path`. For example, if you set log_path to `c:\logs`, the driver creates the `c:\logs\dotnet` directory and stores the logs there.
+
+The driver looks for the location of the configuration file in the following order:
+
+- `CLIENT_CONFIG_FILE` connection parameter, containing the full path to the configuration file (e.g. `"ACCOUNT=test;USER=test;PASSWORD=test;CLIENT_CONFIG_FILE=C:\\some-path\\client_config.json;"`)
+- `SF_CLIENT_CONFIG_FILE` environment variable, containing the full path to the configuration file.
+- .NET driver/application directory, where the file must be named `sf_client_config.json`.
+- User’s home directory, where the file must be named `sf_client_config.json`.
+
+**Note**
+To enhance security, the driver no longer searches a temporary directory for easy logging configurations.
Additionally, the driver now requires the logging configuration file on Unix-style systems to have limited file permissions, allowing only the file owner to modify the file (such as `chmod 0600` or `chmod 0644`).
+
+To minimize the number of searches for a configuration file, the driver reads the file only for:
+
+- The first connection.
+- The first connection with the `CLIENT_CONFIG_FILE` parameter.
+
+The extra logs are stored in a `dotnet` subfolder of the specified directory, such as `C:\some-path\some-directory\dotnet`.
+
+If a client uses the `log4net` library for application logging, enabling easy logging affects the log level in those logs as well.
diff --git a/doc/QueryingData.md b/doc/QueryingData.md
new file mode 100644
index 000000000..cec2323bb
--- /dev/null
+++ b/doc/QueryingData.md
@@ -0,0 +1,252 @@
+## Run a Query and Read Data
+
+```cs
+using (IDbConnection conn = new SnowflakeDbConnection())
+{
+    conn.ConnectionString = connectionString;
+    conn.Open();
+
+    IDbCommand cmd = conn.CreateCommand();
+    cmd.CommandText = "select * from t";
+    IDataReader reader = cmd.ExecuteReader();
+
+    while (reader.Read())
+    {
+        Console.WriteLine(reader.GetString(0));
+    }
+
+    conn.Close();
+}
+```
+
+Note that for a `TIME` column, the reader returns a `System.DateTime` value. If you need a `System.TimeSpan` column, call the
+`GetTimeSpan` method in `SnowflakeDbDataReader`. This method was introduced in the v2.0.4 release.
+
+Note that because this method is not available in the generic `IDataReader` interface, you must cast the object as
+`SnowflakeDbDataReader` before calling the method. For example:
+
+```cs
+TimeSpan timeSpanTime = ((SnowflakeDbDataReader)reader).GetTimeSpan(13);
+```
+
+## Execute a query asynchronously on the server
+
+You can run the query asynchronously on the server. The server responds immediately with a `queryId` and continues to execute the query asynchronously.
+You can then use this `queryId` to check the query status, or wait until the query is completed and get the results.
+You can also start the query in one session and retrieve its results in another session, based on the queryId.
+
+**Note**: There are 2 levels of asynchronous execution. One is asynchronous execution in terms of the C# language (`async await`).
+The other is asynchronous execution of the query by the server (you can recognize it by method names containing `InAsyncMode`, e.g. `ExecuteInAsyncMode`, `ExecuteAsyncInAsyncMode`).
+
+Example of synchronous code starting a query to be executed asynchronously on the server:
+```cs
+using (SnowflakeDbConnection conn = new SnowflakeDbConnection("account=testaccount;username=testusername;password=testpassword"))
+{
+    conn.Open();
+    SnowflakeDbCommand cmd = (SnowflakeDbCommand)conn.CreateCommand();
+    cmd.CommandText = "SELECT ...";
+    var queryId = cmd.ExecuteInAsyncMode();
+    // ...
+}
+```
+
+Example of asynchronous code starting a query to be executed asynchronously on the server:
+```cs
+using (SnowflakeDbConnection conn = new SnowflakeDbConnection("account=testaccount;username=testusername;password=testpassword"))
+{
+    await conn.OpenAsync(CancellationToken.None).ConfigureAwait(false);
+    SnowflakeDbCommand cmd = (SnowflakeDbCommand)conn.CreateCommand();
+    cmd.CommandText = "SELECT ...";
+    var queryId = await cmd.ExecuteAsyncInAsyncMode(CancellationToken.None).ConfigureAwait(false);
+    // ...
+}
+```
+
+You can check the status of a query executed asynchronously on the server either in synchronous code:
+```cs
+var queryStatus = cmd.GetQueryStatus(queryId);
+Assert.IsTrue(conn.IsStillRunning(queryStatus)); // assuming that the query is still running
+Assert.IsFalse(conn.IsAnError(queryStatus)); // assuming that the query has not finished with an error
+```
+or in asynchronous code:
+```cs
+var queryStatus = await cmd.GetQueryStatusAsync(queryId, CancellationToken.None).ConfigureAwait(false);
+Assert.IsTrue(conn.IsStillRunning(queryStatus)); // assuming that the query is still running
+Assert.IsFalse(conn.IsAnError(queryStatus)); // assuming that the query has not finished with an error
+```
+
+The following example shows how to get query results.
+The operation will repeatedly check the query status until the query completes, a timeout occurs, or the maximum number of attempts is reached.
+The synchronous code example:
+```cs
+DbDataReader reader = cmd.GetResultsFromQueryId(queryId);
+```
+and the asynchronous code example:
+```cs
+DbDataReader reader = await cmd.GetResultsFromQueryIdAsync(queryId, CancellationToken.None).ConfigureAwait(false);
+```
+
+**Note**: GET/PUT operations are currently not enabled for asynchronous executions.
+
+## Executing a Batch of SQL Statements (Multi-Statement Support)
+
+With version 2.0.18 and later of the .NET connector, you can send
+a batch of SQL statements, separated by semicolons,
+to be executed in a single request.
+
+**Note**: Snowflake does not currently support variable binding in multi-statement SQL requests.
+
+---
+
+**Note**
+
+By default, Snowflake returns an error for queries issued with multiple statements to protect against SQL injection attacks. The multiple statements feature makes your system more vulnerable to SQL injections, and so it should be used carefully.
You can reduce the risk by using the MULTI_STATEMENT_COUNT parameter to specify the number of statements to be executed, which makes it more difficult to inject a statement by appending to it.
+
+---
+
+You can execute multiple statements as a batch in the same way you execute queries with single statements, except that the query string contains multiple statements separated by semicolons. Note that multiple statements execute sequentially, not in parallel.
+
+You can set this parameter at the session level using the following command:
+
+```
+ALTER SESSION SET MULTI_STATEMENT_COUNT = <0/1>;
+```
+
+where:
+
+- **0**: Enables an unspecified number of SQL statements in a query.
+
+  Using this value allows batch queries to contain any number of SQL statements without needing to specify the MULTI_STATEMENT_COUNT statement parameter. However, be aware that using this value reduces the protection against SQL injection attacks.
+
+- **1**: Allows one SQL statement or a specified number of statements in a query string (default).
+
+  You must include MULTI_STATEMENT_COUNT as a statement parameter to specify the number of statements included when the query string contains more than one statement. If the number of statements sent in the query string does not match the MULTI_STATEMENT_COUNT value, the .NET driver rejects the request. You can, however, omit this parameter if you send a single statement.
+
+The following example sets the MULTI_STATEMENT_COUNT session parameter to 1. Then for an individual command, it sets MULTI_STATEMENT_COUNT=3 to indicate that the query contains precisely three SQL commands. The query string, `cmd.CommandText`, then contains the three statements to execute.
+
+```cs
+using (IDbConnection conn = new SnowflakeDbConnection())
+{
+    conn.ConnectionString = ConnectionString;
+    conn.Open();
+    IDbCommand sessionCmd = conn.CreateCommand();
+    sessionCmd.CommandText = "ALTER SESSION SET MULTI_STATEMENT_COUNT = 1;";
+    sessionCmd.ExecuteNonQuery();
+
+    using (DbCommand cmd = (DbCommand)conn.CreateCommand())
+    {
+        // Set statement count
+        var stmtCountParam = cmd.CreateParameter();
+        stmtCountParam.ParameterName = "MULTI_STATEMENT_COUNT";
+        stmtCountParam.DbType = DbType.Int16;
+        stmtCountParam.Value = 3;
+        cmd.Parameters.Add(stmtCountParam);
+        cmd.CommandText = "CREATE OR REPLACE TABLE test(n int); INSERT INTO test values(1), (2); SELECT * FROM test ORDER BY n;";
+        DbDataReader reader = cmd.ExecuteReader();
+        do
+        {
+            if (reader.HasRows)
+            {
+                while (reader.Read())
+                {
+                    // read data
+                }
+            }
+        }
+        while (reader.NextResult());
+    }
+    conn.Close();
+}
+```
+
+## Bind Parameter
+
+**Note**: Snowflake does not currently support variable binding in multi-statement SQL requests.
+
+This example shows how bound parameters are converted from C# data types to
+Snowflake data types. For example, if the data type of the Snowflake column
+is INTEGER, then you can bind C# data types Int32 or Int16.
+
+This example inserts 3 rows into a table with one column.
+
+```cs
+using (IDbConnection conn = new SnowflakeDbConnection())
+{
+    conn.ConnectionString = connectionString;
+    conn.Open();
+
+    IDbCommand cmd = conn.CreateCommand();
+    cmd.CommandText = "create or replace table T(cola int)";
+    int count = cmd.ExecuteNonQuery();
+    Assert.AreEqual(0, count);
+
+    cmd = conn.CreateCommand();
+    cmd.CommandText = "insert into t values (?), (?), (?)";
+
+    var p1 = cmd.CreateParameter();
+    p1.ParameterName = "1";
+    p1.Value = 10;
+    p1.DbType = DbType.Int32;
+    cmd.Parameters.Add(p1);
+
+    var p2 = cmd.CreateParameter();
+    p2.ParameterName = "2";
+    p2.Value = 10000L;
+    p2.DbType = DbType.Int32;
+    cmd.Parameters.Add(p2);
+
+    var p3 = cmd.CreateParameter();
+    p3.ParameterName = "3";
+    p3.Value = (short)1;
+    p3.DbType = DbType.Int16;
+    cmd.Parameters.Add(p3);
+
+    count = cmd.ExecuteNonQuery();
+    Assert.AreEqual(3, count);
+
+    cmd.CommandText = "drop table if exists T";
+    count = cmd.ExecuteNonQuery();
+    Assert.AreEqual(0, count);
+
+    conn.Close();
+}
+```
+
+## Bind Array Variables
+
+The sample code creates a table with a single integer column and then uses array binding to populate the table with values 0 to 70000.
+
+```cs
+using (IDbConnection conn = new SnowflakeDbConnection())
+{
+    conn.ConnectionString = ConnectionString;
+    conn.Open();
+
+    using (IDbCommand cmd = conn.CreateCommand())
+    {
+        cmd.CommandText = "create or replace table putArrayBind(colA integer)";
+        cmd.ExecuteNonQuery();
+
+        string insertCommand = "insert into putArrayBind values (?)";
+        cmd.CommandText = insertCommand;
+
+        int total = 70000;
+
+        List<int> arrint = new List<int>();
+        for (int i = 0; i < total; i++)
+        {
+            arrint.Add(i);
+        }
+        var p1 = cmd.CreateParameter();
+        p1.ParameterName = "1";
+        p1.DbType = DbType.Int32;
+        p1.Value = arrint.ToArray();
+        cmd.Parameters.Add(p1);
+
+        var count = cmd.ExecuteNonQuery(); // count = 70000
+    }
+
+    conn.Close();
+}
+```
+
diff --git a/doc/StageFiles.md b/doc/StageFiles.md
new file mode 100644
index 000000000..aa59b82b9
--- /dev/null
+++ b/doc/StageFiles.md
@@ -0,0 +1,64 @@
+## PUT local files to stage
+
+The PUT command uploads files from a local directory, or a single local file, to a Snowflake stage (a named stage, a table internal stage, or a user internal stage).
+Staged files can then be used to load data into a table.
+More on this topic: [File staging with PUT](https://docs.snowflake.com/en/sql-reference/sql/put).
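Each stage flavor is addressed with a different prefix in the PUT command; as a quick orientation (the stage, table, and file names below are only illustrative):

```snowflake
-- named stage
PUT file:///tmp/some_data.csv @my_stage;
-- table internal stage
PUT file:///tmp/some_data.csv @%my_table;
-- user internal stage
PUT file:///tmp/some_data.csv @~;
```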
+In the driver, the command can be executed as shown below:
+
+```cs
+using (IDbConnection conn = new SnowflakeDbConnection())
+{
+    try
+    {
+        conn.ConnectionString = "<connection parameters>";
+        conn.Open();
+        var cmd = (SnowflakeDbCommand)conn.CreateCommand(); // the cast allows getting the QueryId from the command
+
+        cmd.CommandText = "PUT file://some_data.csv @my_schema.my_stage AUTO_COMPRESS=TRUE";
+        var reader = cmd.ExecuteReader();
+        Assert.IsTrue(reader.Read());
+        Assert.DoesNotThrow(() => Guid.Parse(cmd.GetQueryId()));
+    }
+    catch (SnowflakeDbException e)
+    {
+        Assert.DoesNotThrow(() => Guid.Parse(e.QueryId)); // when failed
+        Assert.That(e.InnerException.GetType(), Is.EqualTo(typeof(FileNotFoundException)));
+    }
+}
+```
+
+On failure, a SnowflakeDbException is thrown, carrying the affected QueryId when one is available.
+If the failure occurred after the query was executed, the exception contains the affected QueryId;
+if it occurred during the initial phase of execution, the QueryId might not be provided.
+The inner exception (if applicable) provides details on the cause of the failure,
+for example: FileNotFoundException, DirectoryNotFoundException.
+
+## GET stage files
+
+The GET command downloads stage directories or files to a local directory.
+It can be used with a named stage, a table internal stage, or a user stage.
+Detailed information on the command: [Downloading files with GET](https://docs.snowflake.com/en/sql-reference/sql/get).
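The same stage prefixes apply when downloading with GET; for example (the names and paths are illustrative):

```snowflake
-- from a named stage
GET @my_stage/stage_file.csv file:///tmp/data/;
-- from a table internal stage
GET @%my_table file:///tmp/data/;
-- from a user internal stage
GET @~/stage_file.csv file:///tmp/data/;
```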
+To use the command in the driver, similar code can be executed in a client app:
+
+```cs
+    try
+    {
+        conn.ConnectionString = "<connection parameters>";
+        conn.Open();
+        var cmd = (SnowflakeDbCommand)conn.CreateCommand(); // the cast allows getting the QueryId from the command
+
+        cmd.CommandText = "GET @my_schema.my_stage/stage_file.csv file://local_file.csv";
+        var reader = cmd.ExecuteReader();
+        Assert.IsTrue(reader.Read()); // True on success, False on failure
+        Assert.DoesNotThrow(() => Guid.Parse(cmd.GetQueryId()));
+    }
+    catch (SnowflakeDbException e)
+    {
+        Assert.DoesNotThrow(() => Guid.Parse(e.QueryId)); // on failure
+    }
+```
+
+On failure, a SnowflakeDbException is thrown with the affected QueryId when available.
+When no technical or syntax errors occurred but the DbDataReader has no data to process, it returns False
+without throwing an exception.
diff --git a/doc/Testing.md b/doc/Testing.md
new file mode 100644
index 000000000..70ee63f28
--- /dev/null
+++ b/doc/Testing.md
@@ -0,0 +1,54 @@
+# Testing the Connector
+
+Before running tests, create a parameters.json file under the Snowflake.Data.Tests\ directory. In this file, specify the username, password, and account info that the tests will run against. Here is a sample parameters.json file:
+
+```
+{
+  "testconnection": {
+    "SNOWFLAKE_TEST_USER": "snowman",
+    "SNOWFLAKE_TEST_PASSWORD": "XXXXXXX",
+    "SNOWFLAKE_TEST_ACCOUNT": "TESTACCOUNT",
+    "SNOWFLAKE_TEST_WAREHOUSE": "TESTWH",
+    "SNOWFLAKE_TEST_DATABASE": "TESTDB",
+    "SNOWFLAKE_TEST_SCHEMA": "TESTSCHEMA",
+    "SNOWFLAKE_TEST_ROLE": "TESTROLE",
+    "SNOWFLAKE_TEST_HOST": "testaccount.snowflakecomputing.com"
+  }
+}
+```
+
+## Command Prompt
+
+Building the solution builds the connector and test binaries. Issue the following command from the command line to run the tests. The test binary is located in the Debug directory if you built the solution file in Debug mode.
+
+```bash
+cd Snowflake.Data.Tests
+dotnet test -f net6.0 -l "console;verbosity=normal"
+```
+
+Tests can also be run under code coverage:
+
+```bash
+dotnet-coverage collect "dotnet test --framework net6.0 --no-build -l console;verbosity=normal" --output net6.0_coverage.xml --output-format cobertura --settings coverage.config
+```
+
+You can also run only a specific suite of tests (integration or unit).
+
+Running unit tests:
+
+```bash
+cd Snowflake.Data.Tests
+dotnet test -l "console;verbosity=normal" --filter FullyQualifiedName~UnitTests
+```
+
+Running integration tests:
+
+```bash
+cd Snowflake.Data.Tests
+dotnet test -l "console;verbosity=normal" --filter FullyQualifiedName~IntegrationTests
+```
+
+## Visual Studio 2017
+
+Tests can also be run under Visual Studio 2017. Open the solution file in Visual Studio 2017 and run the tests using Test Explorer.
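A single fixture or test can also be selected with `--filter`; the fixture name below is only an example, substitute a real one from the test project:

```bash
cd Snowflake.Data.Tests
# run only the tests whose fully-qualified name contains the given fixture name
dotnet test -f net6.0 -l "console;verbosity=normal" --filter "FullyQualifiedName~SFConnectionIT"
```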