This repository has been archived by the owner on Sep 18, 2024. It is now read-only.
forked from HHS/simpler-grants-gov
Commit
This commit does not belong to any branch on this repository, and may belong to a fork outside of the repository.
Fixes HHS#2064

Modified the search endpoint to be able to return its response as a CSV file.

My understanding is that the CSV download in the current experience is a frequently used feature, so adding it is worthwhile.

An important detail is that all it takes to switch from getting the response as a normal JSON body is to set the new "format" field in the request. So if the frontend added a new download button, it would just make an identical request with this format field added (it would likely also want to adjust the page size to return more than 25 items); a request along these lines is sketched below.

The actual logic is pretty simple: instead of returning the normal JSON body, we construct a CSV file object and return that. There is some formatting/parsing we need to do for this, but it's pretty minor. Note that the converter is explicit about which fields it returns, so the CSV won't keep changing on users if we make adjustments to the schemas elsewhere.

As for returning the file, it just relies on Flask itself. I'm not as familiar with file operations in an endpoint like this, so if there are scaling concerns (i.e. very large output files), let me know. I know there are a few tools in Flask for streaming file responses and other complexities.

If we wanted to add support for more file types, like a JSON or XML file, we'd just need to add converters for those and the file logic should all work the same. I originally implemented this as JSON but realized it was just the exact response body shoved in a file; if a user wants that, they might as well create the file themselves from the API response.

You can see what the file this produces looks like either by running the API yourself or by looking at this one I generated. Note that for the list fields, I used `;` to separate the values within a single cell.

[opportunity_search_results_20240617-152953.csv](https://github.com/user-attachments/files/15873437/opportunity_search_results_20240617-152953.csv)

---------

Co-authored-by: nava-platform-bot <[email protected]>
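To make the request-side change concrete, here is a minimal client sketch. Only the new "format" field and the idea of raising the page size come from the description above; the URL, route path, and pagination shape are placeholders, not the API's actual schema.

```python
import requests

# Hypothetical URL/route and pagination shape -- illustrative only.
response = requests.post(
    "http://localhost:8080/v1/opportunities/search",
    json={
        "pagination": {"page_size": 1000},  # raise from the default 25 items
        "format": "csv",  # the only change needed to get a CSV file back
    },
)
response.raise_for_status()

with open("opportunity_search_results.csv", "wb") as f:
    f.write(response.content)
```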
Showing 8 changed files with 225 additions and 25 deletions.
@@ -0,0 +1,93 @@
import csv
import io
from typing import Sequence

from src.util.dict_util import flatten_dict

CSV_FIELDS = [
    "opportunity_id",
    "opportunity_number",
    "opportunity_title",
    "opportunity_status",
    "agency",
    "category",
    "category_explanation",
    "post_date",
    "close_date",
    "close_date_description",
    "archive_date",
    "is_cost_sharing",
    "expected_number_of_awards",
    "estimated_total_program_funding",
    "award_floor",
    "award_ceiling",
    "additional_info_url",
    "additional_info_url_description",
    "opportunity_assistance_listings",
    "funding_instruments",
    "funding_categories",
    "funding_category_description",
    "applicant_types",
    "applicant_eligibility_description",
    "agency_code",
    "agency_name",
    "agency_phone_number",
    "agency_contact_description",
    "agency_email_address",
    "agency_email_address_description",
    "is_forecast",
    "forecasted_post_date",
    "forecasted_close_date",
    "forecasted_close_date_description",
    "forecasted_award_date",
    "forecasted_project_start_date",
    "fiscal_year",
    "created_at",
    "updated_at",
    # We put the description at the end as it's the longest value
    # which can help improve readability of other fields
    "summary_description",
]
# Same as above, but faster lookup
CSV_FIELDS_SET = set(CSV_FIELDS)


def _process_assistance_listing(assistance_listings: list[dict]) -> str:
    return ";".join(
        [f"{a['assistance_listing_number']}|{a['program_title']}" for a in assistance_listings]
    )


def opportunity_to_csv(opportunities: Sequence[dict]) -> io.StringIO:
    opportunities_to_write: list[dict] = []

    for opportunity in opportunities:
        opp = flatten_dict(opportunity)

        out_opportunity = {}
        for k, v in opp.items():
            # Remove prefixes from nested data structures
            k = k.removeprefix("summary.")
            k = k.removeprefix("assistance_listings.")

            # Remove fields we haven't configured
            if k not in CSV_FIELDS_SET:
                continue

            if k == "opportunity_assistance_listings":
                v = _process_assistance_listing(v)

            if k in ["funding_instruments", "funding_categories", "applicant_types"]:
                v = ";".join(v)

            out_opportunity[k] = v

        opportunities_to_write.append(out_opportunity)

    output = io.StringIO()

    writer = csv.DictWriter(output, fieldnames=CSV_FIELDS, quoting=csv.QUOTE_ALL)
    writer.writeheader()
    writer.writerows(opportunities_to_write)

    return output
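For context, here is a minimal sketch of how the converter above might be wired into a Flask endpoint. This is not the endpoint code from this commit (that lives in the other changed files); the route path, filename, and surrounding search logic are assumptions, while the "format" switch and the reliance on Flask for the file response follow the commit description.

```python
from flask import Flask, Response, request

app = Flask(__name__)


@app.post("/v1/opportunities/search")  # hypothetical route
def search_opportunities():
    # ... run the actual search; an empty list stands in for the results ...
    results: list[dict] = []

    body = request.get_json(silent=True) or {}
    if body.get("format") == "csv":
        # opportunity_to_csv is the converter defined above
        output = opportunity_to_csv(results)
        return Response(
            output.getvalue(),
            mimetype="text/csv",
            headers={"Content-Disposition": "attachment; filename=opportunities.csv"},
        )

    # Default behavior: return the normal JSON response body
    return {"data": results}
```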