update screenshots and minor rewording
tom-vx51 committed Nov 21, 2024
1 parent de41347 commit bad433a
Showing 8 changed files with 31 additions and 15 deletions.
Binary file modified docs/source/images/teams/data_lens_import_dialog.png
Binary file modified docs/source/images/teams/data_lens_import_options.png
Binary file modified docs/source/images/teams/data_lens_imported_samples.png
Binary file modified docs/source/images/teams/data_lens_preview.png
Binary file modified docs/source/images/teams/data_lens_query.png
Binary file modified docs/source/images/teams/data_lens_synthetic_query.png
Binary file modified docs/source/images/teams/data_lens_synthetic_text.png
46 changes: 31 additions & 15 deletions docs/source/teams/data_lens.rst
@@ -142,9 +142,17 @@ button to open the import dialog.
:align: center

Imports can be limited to a specific number of samples, or you can import all
samples matching your query parameters. You can also choose whether to add
new samples for all examples from your data lake, or whether to skip existing
samples with the same `filepath`.
samples matching your query parameters.

The "Skip existing samples" checkbox controls how samples are merged into a
dataset. If checked, samples whose `filepath` is already present in the
dataset will be skipped. If left unchecked, all samples will be added to the
dataset.

.. note::

If you elect to skip existing samples, this will also deduplicate samples
within the data being imported.
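Conceptually, the skip-existing merge behaves like the following sketch (a
hypothetical helper for illustration, not the actual import implementation):

```python
def merge_samples(existing_filepaths, incoming_samples, skip_existing=True):
    """Merge incoming samples into a dataset, optionally skipping
    duplicates by filepath."""
    seen = set(existing_filepaths)
    merged = []
    for sample in incoming_samples:
        fp = sample["filepath"]
        if skip_existing:
            if fp in seen:
                # Skip samples already in the dataset (or already merged)
                continue
            # Tracking seen filepaths also dedupes within the incoming batch
            seen.add(fp)
        merged.append(sample)
    return merged
```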

After configuring the size/behavior of your import, select a destination
dataset for the samples. This can be an existing dataset, or you can choose to
@@ -156,6 +164,10 @@ sample.
When you click import, you will have the option to either execute immediately
or to schedule this import for asynchronous execution.

.. image:: /images/teams/data_lens_import_options.png
:alt: data-lens-import-options
:align: center

If you are importing a small number of samples, then immediate execution may
be appropriate. However, for most cases it is recommended to schedule the
import, as this will result in more consistent and performant execution.
@@ -166,10 +178,6 @@ import, as this will result in more consistent and performant execution.
:ref:`delegated operations <teams-delegated-operations>` framework to
execute asynchronously on your connected compute cluster!

.. image:: /images/teams/data_lens_import_options.png
:alt: data-lens-import-options
:align: center

After selecting your execution preference, you will be able to monitor the
status of your import through the information provided by the import panel.

@@ -480,7 +488,7 @@ method.
inputs = types.Object()
# Add a string field named "sample_text"
inputs.str("sample_text", label="Sample text")
inputs.str("sample_text", label="Sample text", description="Text to render in samples")
return types.Property(inputs)
@@ -514,10 +522,10 @@ logic to integrate `sample_text` into our operator.
samples = []
# Create a sample for each character in our input text
for i in range(len(sample_text)):
for char in sample_text:
samples.append(
fo.Sample(
filepath=f"https://placehold.co/150x150?text={sample_text[i]}"
filepath=f"https://placehold.co/150x150?text={char}"
).to_dict()
)
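The revised loop iterates over characters directly rather than indexing by
position. Outside of FiftyOne, the generated placeholder filepaths can be
sketched as:

```python
sample_text = "cat"

# One placeholder image URL per character in the input text
filepaths = [
    f"https://placehold.co/150x150?text={char}" for char in sample_text
]
```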
@@ -621,7 +629,12 @@ Here's a fully-functional Data Lens connector for BigQuery:
inputs = types.Object()
# We'll enable searching on detection labels
inputs.str("detection_label", label="Detection label", required=True)
inputs.str(
"detection_label",
label="Detection label",
description="Enter a label to find samples with a matching detection",
required=True,
)
return types.Property(inputs)
@@ -631,16 +644,19 @@
ctx: foo.ExecutionContext,
) -> Generator[DataLensSearchResponse, None, None]:
handler = BigQueryHandler()
for batch in handler.handle_request(request):
for batch in handler.handle_request(request, ctx):
yield batch
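The generator above streams batches produced by the handler. Independent of
BigQuery, the underlying batching pattern can be sketched with a generic
helper (a hypothetical example, not part of the connector API):

```python
from typing import Any, Generator, Iterable, List


def batch_rows(
    rows: Iterable[Any], batch_size: int = 50
) -> Generator[List[Any], None, None]:
    """Group an iterable of rows into fixed-size batches, yielding
    each batch as a list."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        # Yield any trailing partial batch
        yield batch
```

Yielding batches as they are produced keeps memory bounded even when the
underlying query returns a large number of rows.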
class BigQueryHandler:
def handle_request(
self,
request: DataLensSearchRequest
request: DataLensSearchRequest,
ctx: foo.ExecutionContext,
) -> Generator[DataLensSearchResponse, None, None]:
# Create our client
# Create our client.
# If needed, we can use secrets from `ctx.secrets` to provide credentials
# or other secure configuration required to interact with our data source.
client = bigquery.Client()
try:
Expand Down Expand Up @@ -717,7 +733,7 @@ Let's take a look at a few parts in detail.
client = bigquery.Client()
In practice, you'll likely need to use :ref:`secrets <teams-secrets>` to
securely provide credentials to connect to your BigQuery.
securely provide credentials to connect to your data source.

.. code-block:: python
:linenos:
