diff --git a/docs/_freeze/posts/campaign-finance/index/execute-results/html.json b/docs/_freeze/posts/campaign-finance/index/execute-results/html.json index 4da314f908f0..4f436cb38c6f 100644 --- a/docs/_freeze/posts/campaign-finance/index/execute-results/html.json +++ b/docs/_freeze/posts/campaign-finance/index/execute-results/html.json @@ -1,14 +1,15 @@ { - "hash": "2631514785c59e4e1d3b37b9c07ea232", + "hash": "989ed0f2ebddb8e202db6a33bc1bf790", "result": { - "markdown": "---\ntitle: \"Exploring campaign finance data\"\nauthor: \"Nick Crews\"\ndate: \"2023-03-24\"\ncategories:\n - blog\n - data engineering\n - case study\n - duckdb\n - performance\n---\n\nHi! My name is [Nick Crews](https://www.linkedin.com/in/nicholas-b-crews/),\nand I'm a data engineer that looks at public campaign finance data.\n\nIn this post, I'll walk through how I use Ibis to explore public campaign contribution\ndata from the Federal Election Commission (FEC). We'll do some loading,\ncleaning, featurizing, and visualization. There will be filtering, sorting, grouping,\nand aggregation.\n\n## Downloading The Data\n\n::: {#02d63441 .cell execution_count=1}\n``` {.python .cell-code}\nfrom pathlib import Path\nfrom zipfile import ZipFile\nfrom urllib.request import urlretrieve\n\n# Download and unzip the 2018 individual contributions data\nurl = \"https://cg-519a459a-0ea3-42c2-b7bc-fa1143481f74.s3-us-gov-west-1.amazonaws.com/bulk-downloads/2018/indiv18.zip\"\nzip_path = Path(\"indiv18.zip\")\ncsv_path = Path(\"indiv18.csv\")\n\nif not zip_path.exists():\n urlretrieve(url, zip_path)\n\nif not csv_path.exists():\n with ZipFile(zip_path) as zip_file, csv_path.open(\"w\") as csv_file:\n for line in zip_file.open(\"itcont.txt\"):\n csv_file.write(line.decode())\n```\n:::\n\n\n## Loading the data\n\nNow that we have our raw data in a .csv format, let's load it into Ibis,\nusing the duckdb backend.\n\nNote that a 4.3 GB .csv would be near the limit of what pandas could\nhandle on my laptop with 16GB of RAM. In pandas, typically every time\nyou perform a transformation on the data, a copy of the data is made.\nI could only do a few transformations before I ran out of memory.\n\nWith Ibis, this problem is solved in two different ways.\n\nFirst, because they are designed to work with very large datasets,\nmany (all?) SQL backends support out of core operations.\nThe data lives on disk, and are only loaded in a streaming fashion\nwhen needed, and then written back to disk as the operation is performed.\n\nSecond, unless you explicitly ask for it, Ibis makes use of lazy\nevaluation. This means that when you ask for a result, the\nresult is not persisted in memory. Only the original source\ndata is persisted. Everything else is derived from this on the fly.\n\n::: {#83a871f2 .cell execution_count=2}\n``` {.python .cell-code}\nimport ibis\nfrom ibis import _\n\nibis.options.interactive = True\n\n# The raw .csv file doesn't have column names, so we will add them in the next step.\nraw = ibis.read_csv(csv_path)\nraw\n```\n\n::: {.cell-output .cell-output-display execution_count=2}\n```{=html}\n
┏━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━┓\n┃ C00401224 ┃ A ┃ M6 ┃ P ┃ 201804059101866001 ┃ 24T ┃ IND ┃ STOUFFER, LEIGH ┃ AMSTELVEEN ┃ ZZ ┃ 1187RC ┃ MYSELF ┃ SELF EMPLOYED ┃ 05172017 ┃ 10 ┃ C00458000 ┃ SA11AI_81445687 ┃ 1217152 ┃ column18 ┃ EARMARKED FOR PROGRESSIVE CHANGE CAMPAIGN COMMITTEE (C00458000) ┃ 4050820181544765358 ┃\n┡━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━┩\n│ string │ string │ string │ string │ int64 │ string │ string │ string │ string │ string │ string │ string │ string │ string │ int64 │ string │ string │ int64 │ string │ string │ int64 │\n├───────────┼────────┼────────┼────────┼────────────────────┼────────┼────────┼───────────────────┼──────────────┼────────┼───────────┼───────────────────┼─────────────────────────┼──────────┼───────┼───────────┼─────────────────┼─────────┼──────────┼─────────────────────────────────────────────────────────────────┼─────────────────────┤\n│ C00401224 │ A │ M6 │ P │ 201804059101867748 │ 24T │ IND │ STRAWS, JOYCE │ OCOEE │ FL │ 34761 │ SILVERSEA CRUISES │ RESERVATIONS SUPERVISOR │ 05182017 │ 10 │ C00000935 │ SA11AI_81592336 │ 1217152 │ NULL │ EARMARKED FOR DCCC (C00000935) │ 4050820181544770597 │\n│ C00401224 │ A │ M6 │ P │ 201804059101867748 │ 24T │ IND │ STRAWS, JOYCE │ OCOEE │ FL │ 34761 │ SILVERSEA CRUISES │ RESERVATIONS SUPERVISOR │ 05192017 │ 15 │ C00000935 │ SA11AI_81627562 │ 1217152 │ NULL │ EARMARKED FOR DCCC (C00000935) │ 4050820181544770598 │\n│ C00401224 │ A │ M6 │ P │ 201804059101865942 │ 24T │ IND │ STOTT, JIM │ CAPE NEDDICK │ ME │ 039020760 │ NONE │ NONE │ 05132017 │ 35 │ C00000935 │ SA11AI_81047921 │ 1217152 │ NULL │ EARMARKED FOR DCCC (C00000935) │ 4050820181544765179 │\n│ C00401224 │ A │ M6 │ P │ 201804059101865942 │ 24T │ IND │ STOTT, JIM │ CAPE NEDDICK │ ME │ 039020760 │ NONE │ NONE │ 05152017 │ 35 │ C00000935 │ SA11AI_81209209 │ 1217152 │ NULL │ EARMARKED FOR DCCC (C00000935) │ 4050820181544765180 │\n│ C00401224 │ A │ M6 │ P │ 201804059101865942 │ 24T │ IND │ STOTT, JIM │ CAPE NEDDICK │ ME │ 039020760 │ NONE │ NONE │ 05192017 │ 5 │ C00000935 │ SA11AI_81605223 │ 1217152 │ NULL │ EARMARKED FOR DCCC (C00000935) │ 4050820181544765181 │\n│ C00401224 │ A │ M6 │ P │ 201804059101865943 │ 24T │ IND │ STOTT, JIM │ CAPE NEDDICK │ ME │ 039020760 │ NONE │ NONE │ 05242017 │ 15 │ C00000935 │ SA11AI_82200022 │ 1217152 │ NULL │ EARMARKED FOR DCCC (C00000935) │ 4050820181544765182 │\n│ C00401224 │ A │ M6 │ P │ 201804059101865943 │ 24T │ IND │ STOTT, JIM │ CAPE NEDDICK │ ME │ 03902 │ NOT EMPLOYED │ NOT EMPLOYED │ 05292017 │ 100 │ C00213512 │ SA11AI_82589834 │ 1217152 │ NULL │ EARMARKED FOR NANCY PELOSI FOR CONGRESS (C00213512) │ 4050820181544765184 │\n│ C00401224 │ A │ M6 │ P │ 201804059101865944 │ 24T │ IND │ STOTT, JIM │ CAPE NEDDICK │ ME │ 039020760 │ NONE │ NONE │ 05302017 │ 35 │ C00000935 │ SA11AI_82643727 │ 1217152 │ NULL │ EARMARKED FOR DCCC (C00000935) │ 4050820181544765185 │\n│ C00401224 │ A │ M6 │ P │ 201804059101867050 │ 24T │ IND │ STRANGE, 
WINIFRED │ ANNA MSRIA │ FL │ 34216 │ NOT EMPLOYED │ NOT EMPLOYED │ 05162017 │ 25 │ C00000935 │ SA11AI_81325918 │ 1217152 │ NULL │ EARMARKED FOR DCCC (C00000935) │ 4050820181544768505 │\n│ C00401224 │ A │ M6 │ P │ 201804059101867051 │ 24T │ IND │ STRANGE, WINIFRED │ ANNA MSRIA │ FL │ 34216 │ NOT EMPLOYED │ NOT EMPLOYED │ 05232017 │ 25 │ C00000935 │ SA11AI_81991189 │ 1217152 │ NULL │ EARMARKED FOR DCCC (C00000935) │ 4050820181544768506 │\n│ … │ … │ … │ … │ … │ … │ … │ … │ … │ … │ … │ … │ … │ … │ … │ … │ … │ … │ … │ … │ … │\n└───────────┴────────┴────────┴────────┴────────────────────┴────────┴────────┴───────────────────┴──────────────┴────────┴───────────┴───────────────────┴─────────────────────────┴──────────┴───────┴───────────┴─────────────────┴─────────┴──────────┴─────────────────────────────────────────────────────────────────┴─────────────────────┘\n\n```\n:::\n:::\n\n\n::: {#d2a81789 .cell execution_count=3}\n``` {.python .cell-code}\n# For a more comprehesive description of the columns and their meaning, see\n# https://www.fec.gov/campaign-finance-data/contributions-individuals-file-description/\ncolumns = {\n \"CMTE_ID\": \"keep\", # Committee ID\n \"AMNDT_IND\": \"drop\", # Amendment indicator. A = amendment, N = new, T = termination\n \"RPT_TP\": \"drop\", # Report type (monthly, quarterly, etc)\n \"TRANSACTION_PGI\": \"keep\", # Primary/general indicator\n \"IMAGE_NUM\": \"drop\", # Image number\n \"TRANSACTION_TP\": \"drop\", # Transaction type\n \"ENTITY_TP\": \"keep\", # Entity type\n \"NAME\": \"drop\", # Contributor name\n \"CITY\": \"keep\", # Contributor city\n \"STATE\": \"keep\", # Contributor state\n \"ZIP_CODE\": \"drop\", # Contributor zip code\n \"EMPLOYER\": \"drop\", # Contributor employer\n \"OCCUPATION\": \"drop\", # Contributor occupation\n \"TRANSACTION_DT\": \"keep\", # Transaction date\n \"TRANSACTION_AMT\": \"keep\", # Transaction amount\n # Other ID. For individual contributions will be null. For contributions from\n # other FEC committees, will be the committee ID of the other committee.\n \"OTHER_ID\": \"drop\",\n \"TRAN_ID\": \"drop\", # Transaction ID\n \"FILE_NUM\": \"drop\", # File number, unique number assigned to each report filed with the FEC\n \"MEMO_CD\": \"drop\", # Memo code\n \"MEMO_TEXT\": \"drop\", # Memo text\n \"SUB_ID\": \"drop\", # Submission ID. Unique number assigned to each transaction.\n}\n\nrenaming = {old: new for old, new in zip(raw.columns, columns.keys())}\nto_keep = [k for k, v in columns.items() if v == \"keep\"]\nkept = raw.relabel(renaming)[to_keep]\nkept\n```\n\n::: {.cell-output .cell-output-display execution_count=3}\n```{=html}\n
┏━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓\n┃ CMTE_ID ┃ TRANSACTION_PGI ┃ ENTITY_TP ┃ CITY ┃ STATE ┃ TRANSACTION_DT ┃ TRANSACTION_AMT ┃\n┡━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩\n│ string │ string │ string │ string │ string │ string │ int64 │\n├───────────┼─────────────────┼───────────┼──────────────┼────────┼────────────────┼─────────────────┤\n│ C00401224 │ P │ IND │ OCOEE │ FL │ 05182017 │ 10 │\n│ C00401224 │ P │ IND │ OCOEE │ FL │ 05192017 │ 15 │\n│ C00401224 │ P │ IND │ CAPE NEDDICK │ ME │ 05132017 │ 35 │\n│ C00401224 │ P │ IND │ CAPE NEDDICK │ ME │ 05152017 │ 35 │\n│ C00401224 │ P │ IND │ CAPE NEDDICK │ ME │ 05192017 │ 5 │\n│ C00401224 │ P │ IND │ CAPE NEDDICK │ ME │ 05242017 │ 15 │\n│ C00401224 │ P │ IND │ CAPE NEDDICK │ ME │ 05292017 │ 100 │\n│ C00401224 │ P │ IND │ CAPE NEDDICK │ ME │ 05302017 │ 35 │\n│ C00401224 │ P │ IND │ ANNA MSRIA │ FL │ 05162017 │ 25 │\n│ C00401224 │ P │ IND │ ANNA MSRIA │ FL │ 05232017 │ 25 │\n│ … │ … │ … │ … │ … │ … │ … │\n└───────────┴─────────────────┴───────────┴──────────────┴────────┴────────────────┴─────────────────┘\n\n```\n:::\n:::\n\n\n::: {#1e6d16fe .cell execution_count=4}\n``` {.python .cell-code}\n# 21 million rows\nkept.count()\n```\n\n::: {.cell-output .cell-output-display}\n```{=html}\n\n```\n:::\n\n::: {.cell-output .cell-output-display execution_count=4}\n\n::: {.ansi-escaped-output}\n```{=html}\n
21730730
\n```\n:::\n\n:::\n:::\n\n\nHuh, what's up with those timings? Previewing the head only took a fraction of a second,\nbut finding the number of rows took 10 seconds.\n\nThat's because duckdb is scanning the .csv file on the fly every time we access it.\nSo we only have to read the first few lines to get that preview,\nbut we have to read the whole file to get the number of rows.\n\nNote that this isn't a feature of Ibis, but a feature of Duckdb. This what I think is\none of the strengths of Ibis: Ibis itself doesn't have to implement any of the\noptimimizations or features of the backends. Those backends can focus on what they do\nbest, and Ibis can get those things for free.\n\nSo, let's tell duckdb to actually read in the file to its native format so later accesses\nwill be faster. This will be a ~20 seconds that we'll only have to pay once.\n\n::: {#185a2d89 .cell execution_count=5}\n``` {.python .cell-code}\nkept = kept.cache()\nkept\n```\n\n::: {.cell-output .cell-output-display execution_count=5}\n```{=html}\n┏━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓\n┃ CMTE_ID ┃ TRANSACTION_PGI ┃ ENTITY_TP ┃ CITY ┃ STATE ┃ TRANSACTION_DT ┃ TRANSACTION_AMT ┃\n┡━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩\n│ string │ string │ string │ string │ string │ string │ int64 │\n├───────────┼─────────────────┼───────────┼──────────────┼────────┼────────────────┼─────────────────┤\n│ C00401224 │ P │ IND │ OCOEE │ FL │ 05182017 │ 10 │\n│ C00401224 │ P │ IND │ OCOEE │ FL │ 05192017 │ 15 │\n│ C00401224 │ P │ IND │ CAPE NEDDICK │ ME │ 05132017 │ 35 │\n│ C00401224 │ P │ IND │ CAPE NEDDICK │ ME │ 05152017 │ 35 │\n│ C00401224 │ P │ IND │ CAPE NEDDICK │ ME │ 05192017 │ 5 │\n│ C00401224 │ P │ IND │ CAPE NEDDICK │ ME │ 05242017 │ 15 │\n│ C00401224 │ P │ IND │ CAPE NEDDICK │ ME │ 05292017 │ 100 │\n│ C00401224 │ P │ IND │ CAPE NEDDICK │ ME │ 05302017 │ 35 │\n│ C00401224 │ P │ IND │ ANNA MSRIA │ FL │ 05162017 │ 25 │\n│ C00401224 │ P │ IND │ ANNA MSRIA │ FL │ 05232017 │ 25 │\n│ … │ … │ … │ … │ … │ … │ … │\n└───────────┴─────────────────┴───────────┴──────────────┴────────┴────────────────┴─────────────────┘\n\n```\n:::\n:::\n\n\nLook, now accessing it only takes a fraction of a second!\n\n::: {#9253e73f .cell execution_count=6}\n``` {.python .cell-code}\nkept.count()\n```\n\n::: {.cell-output .cell-output-display}\n```{=html}\n\n```\n:::\n\n::: {.cell-output .cell-output-display execution_count=6}\n\n::: {.ansi-escaped-output}\n```{=html}\n
21730730
\n```\n:::\n\n:::\n:::\n\n\n### Committees Data\n\nThe contributions only list an opaque `CMTE_ID` column. We want to know which actual\ncommittee this is. Let's load the committees table so we can lookup from\ncommittee ID to committee name.\n\n::: {#30076e2c .cell execution_count=7}\n``` {.python .cell-code}\ndef read_committees():\n committees_url = \"https://cg-519a459a-0ea3-42c2-b7bc-fa1143481f74.s3-us-gov-west-1.amazonaws.com/bulk-downloads/2018/committee_summary_2018.csv\"\n # This just creates a view, it doesn't actually fetch the data yet\n tmp = ibis.read_csv(committees_url)\n tmp = tmp[\"CMTE_ID\", \"CMTE_NM\"]\n # The raw table contains multiple rows for each committee id, so lets pick\n # an arbitrary row for each committee id as the representative name.\n deduped = tmp.group_by(\"CMTE_ID\").agg(CMTE_NM=_.CMTE_NM.arbitrary())\n return deduped\n\n\ncomms = read_committees().cache()\ncomms\n```\n\n::: {.cell-output .cell-output-display execution_count=7}\n```{=html}\n┏━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n┃ CMTE_ID ┃ CMTE_NM ┃\n┡━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n│ string │ string │\n├───────────┼────────────────────────────────────────────────────────────────┤\n│ C00659441 │ JASON ORTITAY FOR CONGRESS │\n│ C00661249 │ SERVICE AFTER SERVICE │\n│ C00457754 │ U.S. TRAVEL ASSOCIATION PAC │\n│ C00577635 │ ISAKSON VICTORY COMMITTEE │\n│ C00297911 │ TEXAS FORESTRY ASSOCIATION FORESTRY POLITICAL ACTION COMMITTEE │\n│ C00551382 │ VOTECLIMATE.US PAC │\n│ C00414318 │ LOEBSACK FOR CONGRESS │\n│ C00610709 │ AUSTIN INNOVATION 2016 │\n│ C00131607 │ FLORIDA CITRUS MUTUAL POLITCAL ACTION COMMITTEE │\n│ C00136531 │ NATIONAL DEMOCRATIC POLICY COMMITTEE │\n│ … │ … │\n└───────────┴────────────────────────────────────────────────────────────────┘\n\n```\n:::\n:::\n\n\nNow add the committee name to the contributions table:\n\n::: {#0a9f3b35 .cell execution_count=8}\n``` {.python .cell-code}\ntogether = kept.left_join(comms, \"CMTE_ID\").drop(\"CMTE_ID\", \"CMTE_ID_right\")\ntogether\n```\n\n::: {.cell-output .cell-output-display execution_count=8}\n```{=html}\n
┏━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n┃ TRANSACTION_PGI ┃ ENTITY_TP ┃ CITY ┃ STATE ┃ TRANSACTION_DT ┃ TRANSACTION_AMT ┃ CMTE_NM ┃\n┡━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n│ string │ string │ string │ string │ string │ int64 │ string │\n├─────────────────┼───────────┼──────────────────┼────────┼────────────────┼─────────────────┼─────────────────────────────────────────────────┤\n│ P │ IND │ COHASSET │ MA │ 01312017 │ 230 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │\n│ P │ IND │ KEY LARGO │ FL │ 01042017 │ 5000 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │\n│ P │ IND │ LOOKOUT MOUNTAIN │ GA │ 01312017 │ 230 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │\n│ P │ IND │ NORTH YARMOUTH │ ME │ 01312017 │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │\n│ P │ IND │ ALPHARETTA │ GA │ 01312017 │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │\n│ P │ IND │ FALMOUTH │ ME │ 01312017 │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │\n│ P │ IND │ FALMOUTH │ ME │ 01312017 │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │\n│ P │ IND │ HOLLIS CENTER │ ME │ 01312017 │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │\n│ P │ IND │ FALMOUTH │ ME │ 01312017 │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │\n│ P │ IND │ ALEXANDRIA │ VA │ 01312017 │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │\n│ … │ … │ … │ … │ … │ … │ … │\n└─────────────────┴───────────┴──────────────────┴────────┴────────────────┴─────────────────┴─────────────────────────────────────────────────┘\n\n```\n:::\n:::\n\n\n## Cleaning\n\nFirst, let's drop any contributions that don't have a committee name. There are only 6 of them.\n\n::: {#14ae871f .cell execution_count=9}\n``` {.python .cell-code}\n# We can do this fearlessly, no .copy() needed, because\n# everything in Ibis is immutable. If we did this in pandas,\n# we might start modifying the original DataFrame accidentally!\ncleaned = together\n\nhas_name = cleaned.CMTE_NM.notnull()\ncleaned = cleaned[has_name]\nhas_name.value_counts()\n```\n\n::: {.cell-output .cell-output-display execution_count=9}\n```{=html}\n
┏━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┓\n┃ NotNull(CMTE_NM) ┃ NotNull(CMTE_NM)_count ┃\n┡━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━┩\n│ boolean │ int64 │\n├──────────────────┼────────────────────────┤\n│ True │ 21730724 │\n│ False │ 6 │\n└──────────────────┴────────────────────────┘\n\n```\n:::\n:::\n\n\nLet's look at the `ENTITY_TP` column. This represents the type of entity that\nmade the contribution:\n\n::: {#72577ed8 .cell execution_count=10}\n``` {.python .cell-code}\ntogether.ENTITY_TP.value_counts()\n```\n\n::: {.cell-output .cell-output-display execution_count=10}\n```{=html}\n
┏━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓\n┃ ENTITY_TP ┃ ENTITY_TP_count ┃\n┡━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩\n│ string │ int64 │\n├───────────┼─────────────────┤\n│ IND │ 21687992 │\n│ CCM │ 698 │\n│ CAN │ 13659 │\n│ ORG │ 18555 │\n│ PTY │ 49 │\n│ COM │ 867 │\n│ PAC │ 3621 │\n│ NULL │ 5289 │\n└───────────┴─────────────────┘\n\n```\n:::\n:::\n\n\nWe only care about contributions from individuals.\n\nOnce we filter on this column, the contents of it are irrelevant, so let's drop it.\n\n::: {#f29924a2 .cell execution_count=11}\n``` {.python .cell-code}\ncleaned = together[_.ENTITY_TP == \"IND\"].drop(\"ENTITY_TP\")\n```\n:::\n\n\nIt looks like the `TRANSACTION_DT` column was a raw string like \"MMDDYYYY\",\nso let's convert that to a proper date type.\n\n::: {#15443483 .cell execution_count=12}\n``` {.python .cell-code}\nfrom ibis.expr.types import StringValue, DateValue\n\n\ndef mmddyyyy_to_date(val: StringValue) -> DateValue:\n return val.cast(str).lpad(8, \"0\").to_timestamp(\"%m%d%Y\").date()\n\n\ncleaned = cleaned.mutate(date=mmddyyyy_to_date(_.TRANSACTION_DT)).drop(\"TRANSACTION_DT\")\ncleaned\n```\n\n::: {.cell-output .cell-output-display execution_count=12}\n```{=html}\n
┏━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓\n┃ TRANSACTION_PGI ┃ CITY ┃ STATE ┃ TRANSACTION_AMT ┃ CMTE_NM ┃ date ┃\n┡━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩\n│ string │ string │ string │ int64 │ string │ date │\n├─────────────────┼──────────────────┼────────┼─────────────────┼─────────────────────────────────────────────────┼────────────┤\n│ P │ COHASSET │ MA │ 230 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │\n│ P │ KEY LARGO │ FL │ 5000 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-04 │\n│ P │ LOOKOUT MOUNTAIN │ GA │ 230 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │\n│ P │ NORTH YARMOUTH │ ME │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │\n│ P │ ALPHARETTA │ GA │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │\n│ P │ FALMOUTH │ ME │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │\n│ P │ FALMOUTH │ ME │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │\n│ P │ HOLLIS CENTER │ ME │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │\n│ P │ FALMOUTH │ ME │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │\n│ P │ ALEXANDRIA │ VA │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │\n│ … │ … │ … │ … │ … │ … │\n└─────────────────┴──────────────────┴────────┴─────────────────┴─────────────────────────────────────────────────┴────────────┘\n\n```\n:::\n:::\n\n\nThe `TRANSACTION_PGI` column represents the type (primary, general, etc) of election,\nand the year. But it seems to be not very consistent:\n\n::: {#fa016097 .cell execution_count=13}\n``` {.python .cell-code}\ncleaned.TRANSACTION_PGI.topk(10)\n```\n\n::: {.cell-output .cell-output-display execution_count=13}\n```{=html}\n
┏━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┓\n┃ TRANSACTION_PGI ┃ Count(TRANSACTION_PGI) ┃\n┡━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━┩\n│ string │ int64 │\n├─────────────────┼────────────────────────┤\n│ P │ 17013596 │\n│ G2018 │ 2095123 │\n│ P2018 │ 1677183 │\n│ P2020 │ 208501 │\n│ O2018 │ 161874 │\n│ S2017 │ 124336 │\n│ G2017 │ 98401 │\n│ P2022 │ 91136 │\n│ P2017 │ 61153 │\n│ R2017 │ 54281 │\n└─────────────────┴────────────────────────┘\n\n```\n:::\n:::\n\n\n::: {#35c8a393 .cell execution_count=14}\n``` {.python .cell-code}\ndef get_election_type(pgi: StringValue) -> StringValue:\n \"\"\"Use the first letter of the TRANSACTION_PGI column to determine the election type\n\n If the first letter is not one of the known election stage, then return null.\n \"\"\"\n election_types = {\n \"P\": \"primary\",\n \"G\": \"general\",\n \"O\": \"other\",\n \"C\": \"convention\",\n \"R\": \"runoff\",\n \"S\": \"special\",\n \"E\": \"recount\",\n }\n first_letter = pgi[0]\n return first_letter.substitute(election_types, else_=ibis.NA)\n\n\ncleaned = cleaned.mutate(election_type=get_election_type(_.TRANSACTION_PGI)).drop(\n \"TRANSACTION_PGI\"\n)\ncleaned\n```\n\n::: {.cell-output .cell-output-display execution_count=14}\n```{=html}\n
┏━━━━━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n┃ CITY ┃ STATE ┃ TRANSACTION_AMT ┃ CMTE_NM ┃ date ┃ election_type ┃\n┡━━━━━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n│ string │ string │ int64 │ string │ date │ string │\n├──────────────────┼────────┼─────────────────┼─────────────────────────────────────────────────┼────────────┼───────────────┤\n│ COHASSET │ MA │ 230 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │ primary │\n│ KEY LARGO │ FL │ 5000 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-04 │ primary │\n│ LOOKOUT MOUNTAIN │ GA │ 230 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │ primary │\n│ NORTH YARMOUTH │ ME │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │ primary │\n│ ALPHARETTA │ GA │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │ primary │\n│ FALMOUTH │ ME │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │ primary │\n│ FALMOUTH │ ME │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │ primary │\n│ HOLLIS CENTER │ ME │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │ primary │\n│ FALMOUTH │ ME │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │ primary │\n│ ALEXANDRIA │ VA │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │ primary │\n│ … │ … │ … │ … │ … │ … │\n└──────────────────┴────────┴─────────────────┴─────────────────────────────────────────────────┴────────────┴───────────────┘\n\n```\n:::\n:::\n\n\nThat worked well! There are 0 nulls in the resulting column, so we always were\nable to determine the election type.\n\n::: {#e7038c36 .cell execution_count=15}\n``` {.python .cell-code}\ncleaned.election_type.topk(10)\n```\n\n::: {.cell-output .cell-output-display execution_count=15}\n```{=html}\n
┏━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┓\n┃ election_type ┃ Count(election_type) ┃\n┡━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━┩\n│ string │ int64 │\n├───────────────┼──────────────────────┤\n│ primary │ 19061953 │\n│ general │ 2216685 │\n│ other │ 161965 │\n│ special │ 149572 │\n│ runoff │ 69637 │\n│ convention │ 22453 │\n│ recount │ 5063 │\n│ NULL │ 0 │\n└───────────────┴──────────────────────┘\n\n```\n:::\n:::\n\n\nAbout 1/20 of transactions are negative. These could represent refunds, or they\ncould be data entry errors. Let's drop them to keep it simple.\n\n::: {#ab64b9b2 .cell execution_count=16}\n``` {.python .cell-code}\nabove_zero = cleaned.TRANSACTION_AMT > 0\ncleaned = cleaned[above_zero]\nabove_zero.value_counts()\n```\n\n::: {.cell-output .cell-output-display execution_count=16}\n```{=html}\n
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n┃ Greater(TRANSACTION_AMT, 0) ┃ Greater(TRANSACTION_AMT, 0)_count ┃\n┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n│ boolean │ int64 │\n├─────────────────────────────┼───────────────────────────────────┤\n│ True │ 20669809 │\n│ False │ 1018183 │\n└─────────────────────────────┴───────────────────────────────────┘\n\n```\n:::\n:::\n\n\n## Adding Features\n\nNow that the data is cleaned up to a usable format, let's add some features.\n\nFirst, it's useful to categorize donations by size, placing them into buckets\nof small, medium, large, etc.\n\n::: {#db1e9cbe .cell execution_count=17}\n``` {.python .cell-code}\nedges = [\n 10,\n 50,\n 100,\n 500,\n 1000,\n 5000,\n]\nlabels = [\n \"<10\",\n \"10-50\",\n \"50-100\",\n \"100-500\",\n \"500-1000\",\n \"1000-5000\",\n \"5000+\",\n]\n\n\ndef bucketize(vals, edges, str_labels):\n # Uses Ibis's .bucket() method to create a categorical column\n int_labels = vals.bucket(edges, include_under=True, include_over=True)\n # Map the integer labels to the string labels\n int_to_str = {str(i): s for i, s in enumerate(str_labels)}\n return int_labels.cast(str).substitute(int_to_str)\n\n\nfeatured = cleaned.mutate(amount_bucket=bucketize(_.TRANSACTION_AMT, edges, labels))\nfeatured\n```\n\n::: {.cell-output .cell-output-display execution_count=17}\n```{=html}\n
┏━━━━━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n┃ CITY ┃ STATE ┃ TRANSACTION_AMT ┃ CMTE_NM ┃ date ┃ election_type ┃ amount_bucket ┃\n┡━━━━━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n│ string │ string │ int64 │ string │ date │ string │ string │\n├──────────────────┼────────┼─────────────────┼─────────────────────────────────────────────────┼────────────┼───────────────┼───────────────┤\n│ COHASSET │ MA │ 230 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │ primary │ 100-500 │\n│ KEY LARGO │ FL │ 5000 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-04 │ primary │ 1000-5000 │\n│ LOOKOUT MOUNTAIN │ GA │ 230 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │ primary │ 100-500 │\n│ NORTH YARMOUTH │ ME │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │ primary │ 100-500 │\n│ ALPHARETTA │ GA │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │ primary │ 100-500 │\n│ FALMOUTH │ ME │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │ primary │ 100-500 │\n│ FALMOUTH │ ME │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │ primary │ 100-500 │\n│ HOLLIS CENTER │ ME │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │ primary │ 100-500 │\n│ FALMOUTH │ ME │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │ primary │ 100-500 │\n│ ALEXANDRIA │ VA │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │ primary │ 100-500 │\n│ … │ … │ … │ … │ … │ … │ … │\n└──────────────────┴────────┴─────────────────┴─────────────────────────────────────────────────┴────────────┴───────────────┴───────────────┘\n\n```\n:::\n:::\n\n\n## Analysis\n\n### By donation size\n\nOne thing we can look at is the donation breakdown by size:\n- Are most donations small or large?\n- Where do politicians/committees get most of their money from? Large or small donations?\n\nWe also will compare performance of Ibis vs pandas during this groupby.\n\n::: {#2c306d0f .cell execution_count=18}\n``` {.python .cell-code}\ndef summary_by(table, by):\n return table.group_by(by).agg(\n n_donations=_.count(),\n total_amount=_.TRANSACTION_AMT.sum(),\n mean_amount=_.TRANSACTION_AMT.mean(),\n median_amount=_.TRANSACTION_AMT.approx_median(),\n )\n\n\ndef summary_by_pandas(df, by):\n return df.groupby(by, as_index=False).agg(\n n_donations=(\"election_type\", \"count\"),\n total_amount=(\"TRANSACTION_AMT\", \"sum\"),\n mean_amount=(\"TRANSACTION_AMT\", \"mean\"),\n median_amount=(\"TRANSACTION_AMT\", \"median\"),\n )\n\n\n# persist the input data so the following timings of the group_by are accurate.\nsubset = featured[\"election_type\", \"amount_bucket\", \"TRANSACTION_AMT\"]\nsubset = subset.cache()\npandas_subset = subset.execute()\n```\n:::\n\n\nLet's take a look at what we are actually computing:\n\n::: {#a621ca5f .cell execution_count=19}\n``` {.python .cell-code}\nby_type_and_bucket = summary_by(subset, [\"election_type\", \"amount_bucket\"])\nby_type_and_bucket\n```\n\n::: {.cell-output .cell-output-display execution_count=19}\n```{=html}\n
┏━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n┃ election_type ┃ amount_bucket ┃ n_donations ┃ total_amount ┃ mean_amount ┃ median_amount ┃\n┡━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n│ string │ string │ int64 │ int64 │ float64 │ int64 │\n├───────────────┼───────────────┼─────────────┼──────────────┼──────────────┼───────────────┤\n│ primary │ 50-100 │ 2663933 │ 155426540 │ 58.344763 │ 50 │\n│ primary │ 10-50 │ 8115403 │ 187666251 │ 23.124699 │ 25 │\n│ primary │ 100-500 │ 3636287 │ 637353634 │ 175.275943 │ 150 │\n│ primary │ <10 │ 2423728 │ 10080721 │ 4.159180 │ 5 │\n│ primary │ 500-1000 │ 634677 │ 334630687 │ 527.245649 │ 500 │\n│ primary │ 1000-5000 │ 684755 │ 1231394874 │ 1798.299938 │ 1008 │\n│ primary │ 5000+ │ 44085 │ 1558371116 │ 35349.237065 │ 10000 │\n│ general │ 100-500 │ 700821 │ 123174568 │ 175.757530 │ 150 │\n│ general │ 50-100 │ 304363 │ 16184312 │ 53.174374 │ 50 │\n│ general │ 10-50 │ 660787 │ 14411588 │ 21.809733 │ 25 │\n│ … │ … │ … │ … │ … │ … │\n└───────────────┴───────────────┴─────────────┴──────────────┴──────────────┴───────────────┘\n\n```\n:::\n:::\n\n\nOK, now let's do our timings.\n\nOne interesting thing to pay attention to here is the execution time for the following\ngroupby. Before, we could get away with lazy execution: because we only wanted to preview\nthe first few rows, we only had to compute the first few rows, so all our previews were\nvery fast.\n\nBut now, as soon as we do a groupby, we have to actually go through the whole dataset\nin order to compute the aggregate per group. So this is going to be slower. BUT,\nduckdb is still quite fast. It only takes milliseconds to groupby-agg all 20 million rows!\n\n::: {#fc3694c3 .cell execution_count=20}\n``` {.python .cell-code}\n%timeit summary_by(subset, [\"election_type\", \"amount_bucket\"]).execute() # .execute() so we actually fetch the data\n```\n\n::: {.cell-output .cell-output-stdout}\n```\n679 ms ± 11.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n```\n:::\n:::\n\n\nNow let's try the same thing in pandas:\n\n::: {#ab990661 .cell execution_count=21}\n``` {.python .cell-code}\n%timeit summary_by_pandas(pandas_subset, [\"election_type\", \"amount_bucket\"])\n```\n\n::: {.cell-output .cell-output-stdout}\n```\n3.59 s ± 31.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n```\n:::\n:::\n\n\nIt takes about 4 seconds, which is about 10 times slower than duckdb.\nAt this scale, it again doesn't matter,\nbut you could imagine with a dataset much larger than this, it would matter.\n\nLet's also think about memory usage:\n\n::: {#03834f0b .cell execution_count=22}\n``` {.python .cell-code}\npandas_subset.memory_usage(deep=True).sum() / 1e9 # GB\n```\n\n::: {.cell-output .cell-output-display execution_count=22}\n```\n2.782586663\n```\n:::\n:::\n\n\nThe source dataframe is couple gigabytes, so probably during the groupby,\nthe peak memory usage is going to be a bit higher than this. You could use a profiler\nsuch as [FIL](https://github.com/pythonspeed/filprofiler) if you wanted an exact number,\nI was too lazy to use that here.\n\nAgain, this works on my laptop at this dataset size, but much larger than this and I'd\nstart having problems. Duckdb on the other hand is designed around working out of core\nso it should scale to datasets into the hundreds of gigabytes, much larger than your\ncomputer's RAM.\n\n### Back to analysis\n\nOK, let's plot the result of that groupby.\n\nSurprise! 
(Or maybe not...) Most donations are small. But most of the money comes\nfrom donations larger than $1000.\n\nWell if that's the case, why do politicians spend so much time soliciting small\ndonations? One explanation is that they can use the number of donations\nas a marketing pitch, to show how popular they are, and thus how viable of a\ncandidate they are.\n\nThis also might explain whose interests are being served by our politicians.\n\n::: {#cf2c035e .cell execution_count=23}\n``` {.python .cell-code}\nimport altair as alt\n\n# Do some bookkeeping so the buckets are displayed smallest to largest on the charts\nbucket_col = alt.Column(\"amount_bucket:N\", sort=labels)\n\nn_by_bucket = (\n alt.Chart(by_type_and_bucket.execute())\n .mark_bar()\n .encode(\n x=bucket_col,\n y=\"n_donations:Q\",\n color=\"election_type:N\",\n )\n)\ntotal_by_bucket = (\n alt.Chart(by_type_and_bucket.execute())\n .mark_bar()\n .encode(\n x=bucket_col,\n y=\"total_amount:Q\",\n color=\"election_type:N\",\n )\n)\nn_by_bucket | total_by_bucket\n```\n\n::: {.cell-output .cell-output-display execution_count=23}\n```{=html}\n\n\n\n\n```\n:::\n:::\n\n\n### By election stage\n\nLet's look at how donations break down by election stage. Do people donate\ndifferently for primary elections vs general elections?\n\nLet's ignore everything but primary and general elections, since they are the\nmost common, and arguably the most important.\n\n::: {#92651642 .cell execution_count=24}\n``` {.python .cell-code}\ngb2 = by_type_and_bucket[_.election_type.isin((\"primary\", \"general\"))]\nn_donations_per_election_type = _.n_donations.sum().over(group_by=\"election_type\")\nfrac = _.n_donations / n_donations_per_election_type\ngb2 = gb2.mutate(frac_n_donations_per_election_type=frac)\ngb2\n```\n\n::: {.cell-output .cell-output-display execution_count=24}\n```{=html}\n
┏━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n┃ election_type ┃ amount_bucket ┃ n_donations ┃ total_amount ┃ mean_amount ┃ median_amount ┃ frac_n_donations_per_election_type ┃\n┡━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n│ string │ string │ int64 │ int64 │ float64 │ int64 │ float64 │\n├───────────────┼───────────────┼─────────────┼──────────────┼──────────────┼───────────────┼────────────────────────────────────┤\n│ primary │ 10-50 │ 8115403 │ 187666251 │ 23.124699 │ 25 │ 0.445831 │\n│ primary │ <10 │ 2423728 │ 10080721 │ 4.159180 │ 5 │ 0.133151 │\n│ primary │ 100-500 │ 3636287 │ 637353634 │ 175.275943 │ 150 │ 0.199765 │\n│ primary │ 50-100 │ 2663933 │ 155426540 │ 58.344763 │ 50 │ 0.146347 │\n│ primary │ 500-1000 │ 634677 │ 334630687 │ 527.245649 │ 500 │ 0.034867 │\n│ primary │ 1000-5000 │ 684755 │ 1231394874 │ 1798.299938 │ 1008 │ 0.037618 │\n│ primary │ 5000+ │ 44085 │ 1558371116 │ 35349.237065 │ 10000 │ 0.002422 │\n│ general │ 50-100 │ 304363 │ 16184312 │ 53.174374 │ 50 │ 0.138017 │\n│ general │ 100-500 │ 700821 │ 123174568 │ 175.757530 │ 150 │ 0.317796 │\n│ general │ 500-1000 │ 174182 │ 91015697 │ 522.532162 │ 500 │ 0.078985 │\n│ … │ … │ … │ … │ … │ … │ … │\n└───────────────┴───────────────┴─────────────┴──────────────┴──────────────┴───────────────┴────────────────────────────────────┘\n\n```\n:::\n:::\n\n\nIt looks like primary elections get a larger proportion of small donations.\n\n::: {#fd42d9bf .cell execution_count=25}\n``` {.python .cell-code}\nalt.Chart(gb2.execute()).mark_bar().encode(\n x=\"election_type:O\",\n y=\"frac_n_donations_per_election_type:Q\",\n color=bucket_col,\n)\n```\n\n::: {.cell-output .cell-output-display execution_count=25}\n```{=html}\n\n\n\n\n```\n:::\n:::\n\n\n### By recipient\n\nLet's look at the top players. Who gets the most donations?\n\nFar and away it is ActBlue, which acts as a conduit for donations to Democratic\ninterests.\n\nBeto O'Rourke is the top individual politician, hats off to him!\n\n::: {#e844f42e .cell execution_count=26}\n``` {.python .cell-code}\nby_recip = summary_by(featured, \"CMTE_NM\")\nby_recip\n```\n\n::: {.cell-output .cell-output-display execution_count=26}\n```{=html}\n
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n┃ CMTE_NM ┃ n_donations ┃ total_amount ┃ mean_amount ┃ median_amount ┃\n┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n│ string │ int64 │ int64 │ float64 │ int64 │\n├──────────────────────────────────────────────────────────────────────────────────┼─────────────┼──────────────┼─────────────┼───────────────┤\n│ EXELON CORPORATION POLITICAL ACTION COMMITTEE (EXELON PAC) │ 13250 │ 1939503 │ 146.377585 │ 118 │\n│ ARCHER DANIELS MIDLAND COMPANY-ADM PAC │ 4460 │ 275807 │ 61.840135 │ 25 │\n│ PFIZER INC. PAC │ 46900 │ 1948689 │ 41.549872 │ 20 │\n│ SUEZ WATER INC. FEDERAL PAC │ 108 │ 16873 │ 156.231481 │ 120 │\n│ INTERNATIONAL WAREHOUSE LOGISTICS ASSOCIATION PAC │ 90 │ 132200 │ 1468.888889 │ 1000 │\n│ BAKERY, CONFECTIONERY, TOBACCO WORKERS AND GRAIN MILLERS INTERNATIONAL UNION PAC │ 387 │ 19091 │ 49.330749 │ 30 │\n│ UNION PACIFIC CORP. FUND FOR EFFECTIVE GOVERNMENT │ 16118 │ 2436963 │ 151.195123 │ 114 │\n│ NATIONAL ASSOCIATION OF REALTORS POLITICAL ACTION COMMITTEE │ 24277 │ 5492063 │ 226.224945 │ 154 │\n│ AMERICAN FINANCIAL SERVICES ASSOCIATION PAC │ 690 │ 685839 │ 993.969565 │ 65 │\n│ WEYERHAEUSER COMPANY POLITICAL ACTION COMMITTEE │ 5512 │ 343244 │ 62.272134 │ 30 │\n│ … │ … │ … │ … │ … │\n└──────────────────────────────────────────────────────────────────────────────────┴─────────────┴──────────────┴─────────────┴───────────────┘\n\n```\n:::\n:::\n\n\n::: {#a0c1efd8 .cell execution_count=27}\n``` {.python .cell-code}\ntop_recip = by_recip.order_by(ibis.desc(\"n_donations\")).head(10)\nalt.Chart(top_recip.execute()).mark_bar().encode(\n x=alt.X(\"CMTE_NM:O\", sort=\"-y\"),\n y=\"n_donations:Q\",\n)\n```\n\n::: {.cell-output .cell-output-display execution_count=27}\n```{=html}\n\n\n\n\n```\n:::\n:::\n\n\n### By Location\n\nWhere are the largest donations coming from?\n\n::: {#3348eca1 .cell execution_count=28}\n``` {.python .cell-code}\nf2 = featured.mutate(loc=_.CITY + \", \" + _.STATE).drop(\"CITY\", \"STATE\")\nby_loc = summary_by(f2, \"loc\")\n# Drop the places with a small number of donations so we're\n# resistant to outliers for the mean\nby_loc = by_loc[_.n_donations > 1000]\nby_loc\n```\n\n::: {.cell-output .cell-output-display execution_count=28}\n```{=html}\n
┏━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n┃ loc ┃ n_donations ┃ total_amount ┃ mean_amount ┃ median_amount ┃\n┡━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n│ string │ int64 │ int64 │ float64 │ int64 │\n├──────────────────┼─────────────┼──────────────┼─────────────┼───────────────┤\n│ DALLAS, TX │ 154038 │ 66558403 │ 432.090802 │ 58 │\n│ PHILADELPHIA, PA │ 222938 │ 36054977 │ 161.726476 │ 62 │\n│ MALIBU, CA │ 11699 │ 4934763 │ 421.810668 │ 50 │\n│ SANTEE, CA │ 2454 │ 201274 │ 82.018745 │ 26 │\n│ WINNETKA, IL │ 8589 │ 5621809 │ 654.535918 │ 172 │\n│ OREM, UT │ 2110 │ 837475 │ 396.907583 │ 50 │\n│ MESA, AZ │ 22128 │ 1856636 │ 83.904375 │ 20 │\n│ WAYZATA, MN │ 6488 │ 3326275 │ 512.681104 │ 117 │\n│ MINNETONKA, MN │ 5709 │ 1187881 │ 208.071641 │ 50 │\n│ OJAI, CA │ 4496 │ 926422 │ 206.054715 │ 25 │\n│ … │ … │ … │ … │ … │\n└──────────────────┴─────────────┴──────────────┴─────────────┴───────────────┘\n\n```\n:::\n:::\n\n\n::: {#95c93760 .cell execution_count=29}\n``` {.python .cell-code}\ndef top_by(col):\n top = by_loc.order_by(ibis.desc(col)).head(10)\n return (\n alt.Chart(top.execute())\n .mark_bar()\n .encode(\n x=alt.X('loc:O', sort=\"-y\"),\n y=col,\n )\n )\n\n\ntop_by(\"n_donations\") | top_by(\"total_amount\") | top_by(\"mean_amount\") | top_by(\n \"median_amount\"\n)\n```\n\n::: {.cell-output .cell-output-display execution_count=29}\n```{=html}\n\n\n\n\n```\n:::\n:::\n\n\n### By month\n\nWhen do the donations come in?\n\n::: {#6d0776d2 .cell execution_count=30}\n``` {.python .cell-code}\nby_month = summary_by(featured, _.date.month().name(\"month_int\"))\n# Sorta hacky, .substritute doesn't work to change dtypes (yet?)\n# so we cast to string and then do our mapping\nmonth_map = {\n \"1\": \"Jan\",\n \"2\": \"Feb\",\n \"3\": \"Mar\",\n \"4\": \"Apr\",\n \"5\": \"May\",\n \"6\": \"Jun\",\n \"7\": \"Jul\",\n \"8\": \"Aug\",\n \"9\": \"Sep\",\n \"10\": \"Oct\",\n \"11\": \"Nov\",\n \"12\": \"Dec\",\n}\nby_month = by_month.mutate(month_str=_.month_int.cast(str).substitute(month_map))\nby_month\n```\n\n::: {.cell-output .cell-output-display execution_count=30}\n```{=html}\n
┏━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━┓\n┃ month_int ┃ n_donations ┃ total_amount ┃ mean_amount ┃ median_amount ┃ month_str ┃\n┡━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━┩\n│ int32 │ int64 │ int64 │ float64 │ int64 │ string │\n├───────────┼─────────────┼──────────────┼─────────────┼───────────────┼───────────┤\n│ NULL │ 1514 │ 250297 │ 165.321664 │ 99 │ NULL │\n│ 1 │ 348979 │ 174837854 │ 500.998209 │ 122 │ Jan │\n│ 2 │ 581646 │ 255997655 │ 440.126219 │ 100 │ Feb │\n│ 3 │ 1042577 │ 430906797 │ 413.309326 │ 81 │ Mar │\n│ 4 │ 1088244 │ 299252692 │ 274.986760 │ 50 │ Apr │\n│ 5 │ 1374247 │ 387317192 │ 281.839576 │ 48 │ May │\n│ 6 │ 1667285 │ 465305247 │ 279.079610 │ 44 │ Jun │\n│ 7 │ 1607053 │ 320528605 │ 199.451172 │ 35 │ Jul │\n│ 8 │ 2023466 │ 473544182 │ 234.026261 │ 35 │ Aug │\n│ 9 │ 2583847 │ 697888624 │ 270.096729 │ 38 │ Sep │\n│ … │ … │ … │ … │ … │ … │\n└───────────┴─────────────┴──────────────┴─────────────┴───────────────┴───────────┘\n\n```\n:::\n:::\n\n\n::: {#a2b27c61 .cell execution_count=31}\n``` {.python .cell-code}\nmonths_in_order = list(month_map.values())\nalt.Chart(by_month.execute()).mark_bar().encode(\n x=alt.X(\"month_str:O\", sort=months_in_order),\n y=\"n_donations:Q\",\n)\n```\n\n::: {.cell-output .cell-output-display execution_count=31}\n```{=html}\n\n\n\n\n```\n:::\n:::\n\n\n## Conclusion\n\nThanks for following along! I hope you've learned something about Ibis, and\nmaybe even about campaign finance.\n\nIbis is a great tool for exploring data. I now find myself reaching for it\nwhen in the past I would have reached for pandas.\n\nSome of the highlights for me:\n\n- Fast, lazy execution, a great display format, and good type hinting/editor support for a great REPL experience.\n- Very well thought-out API and semantics (e.g. `isinstance(val, NumericValue)`?? That's beautiful!)\n- Fast and fairly complete string support, since I work with a lot of text data.\n- Extremely responsive maintainers. Sometimes I've submitted multiple feature requests and bug reports in a single day, and a PR has been merged by the next day.\n- Escape hatch to SQL. I didn't have to use that here, but if something isn't supported, you can always fall back to SQL.\n\nCheck out [The Ibis Website](https://ibis-project.org/) for more information.\n\n", + "engine": "jupyter", + "markdown": "---\ntitle: \"Exploring campaign finance data\"\nauthor: \"Nick Crews\"\ndate: \"2023-03-24\"\ncategories:\n - blog\n - data engineering\n - case study\n - duckdb\n - performance\n---\n\nHi! My name is [Nick Crews](https://www.linkedin.com/in/nicholas-b-crews/),\nand I'm a data engineer that looks at public campaign finance data.\n\nIn this post, I'll walk through how I use Ibis to explore public campaign contribution\ndata from the Federal Election Commission (FEC). We'll do some loading,\ncleaning, featurizing, and visualization. 
There will be filtering, sorting, grouping,\nand aggregation.\n\n## Downloading The Data\n\n::: {#e29f35c8 .cell execution_count=2}\n``` {.python .cell-code}\nfrom pathlib import Path\nfrom zipfile import ZipFile\nfrom urllib.request import urlretrieve\n\n# Download and unzip the 2018 individual contributions data\nurl = \"https://cg-519a459a-0ea3-42c2-b7bc-fa1143481f74.s3-us-gov-west-1.amazonaws.com/bulk-downloads/2018/indiv18.zip\"\nzip_path = Path(\"indiv18.zip\")\ncsv_path = Path(\"indiv18.csv\")\n\nif not zip_path.exists():\n    urlretrieve(url, zip_path)\n\nif not csv_path.exists():\n    with ZipFile(zip_path) as zip_file, csv_path.open(\"w\") as csv_file:\n        for line in zip_file.open(\"itcont.txt\"):\n            csv_file.write(line.decode())\n```\n:::\n\n\n## Loading the data\n\nNow that we have our raw data in a .csv format, let's load it into Ibis,\nusing the duckdb backend.\n\nNote that a 4.3 GB .csv would be near the limit of what pandas could\nhandle on my laptop with 16GB of RAM. In pandas, typically every time\nyou perform a transformation on the data, a copy of the data is made.\nI could only do a few transformations before I ran out of memory.\n\nWith Ibis, this problem is solved in two different ways.\n\nFirst, because they are designed to work with very large datasets,\nmany (all?) SQL backends support out of core operations.\nThe data lives on disk, and is only loaded in a streaming fashion\nwhen needed, and then written back to disk as the operation is performed.\n\nSecond, unless you explicitly ask for it, Ibis makes use of lazy\nevaluation. This means that when you ask for a result, the\nresult is not persisted in memory. Only the original source\ndata is persisted. Everything else is derived from this on the fly.\n\n::: {#0a6991f4 .cell execution_count=3}\n``` {.python .cell-code}\nimport ibis\nfrom ibis import _\n\nibis.options.interactive = True\n\n# The raw .csv file doesn't have column names, so we will add them in the next step.\nraw = ibis.read_csv(csv_path)\nraw\n```\n\n::: {.cell-output .cell-output-display execution_count=16}\n```{=html}\n<div style=\"max-height: 500px; max-width: 100%; white-space: pre; overflow-x: auto;\">\n
┏━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━┓\n┃ C00401224 ┃ A ┃ M6 ┃ P ┃ 201804059101866001 ┃ 24T ┃ IND ┃ STOUFFER, LEIGH ┃ AMSTELVEEN ┃ ZZ ┃ 1187RC ┃ MYSELF ┃ SELF EMPLOYED ┃ 05172017 ┃ 10 ┃ C00458000 ┃ SA11AI_81445687 ┃ 1217152 ┃ column18 ┃ EARMARKED FOR PROGRESSIVE CHANGE CAMPAIGN COMMITTEE (C00458000) ┃ 4050820181544765358 ┃\n┡━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━┩\n│ string │ string │ string │ string │ int64 │ string │ string │ string │ string │ string │ string │ string │ string │ string │ int64 │ string │ string │ int64 │ string │ string │ int64 │\n├───────────┼────────┼────────┼────────┼────────────────────┼────────┼────────┼───────────────────┼──────────────┼────────┼───────────┼───────────────────┼─────────────────────────┼──────────┼───────┼───────────┼─────────────────┼─────────┼──────────┼─────────────────────────────────────────────────────────────────┼─────────────────────┤\n│ C00401224 │ A │ M6 │ P │ 201804059101867748 │ 24T │ IND │ STRAWS, JOYCE │ OCOEE │ FL │ 34761 │ SILVERSEA CRUISES │ RESERVATIONS SUPERVISOR │ 05182017 │ 10 │ C00000935 │ SA11AI_81592336 │ 1217152 │ NULL │ EARMARKED FOR DCCC (C00000935) │ 4050820181544770597 │\n│ C00401224 │ A │ M6 │ P │ 201804059101867748 │ 24T │ IND │ STRAWS, JOYCE │ OCOEE │ FL │ 34761 │ SILVERSEA CRUISES │ RESERVATIONS SUPERVISOR │ 05192017 │ 15 │ C00000935 │ SA11AI_81627562 │ 1217152 │ NULL │ EARMARKED FOR DCCC (C00000935) │ 4050820181544770598 │\n│ C00401224 │ A │ M6 │ P │ 201804059101865942 │ 24T │ IND │ STOTT, JIM │ CAPE NEDDICK │ ME │ 039020760 │ NONE │ NONE │ 05132017 │ 35 │ C00000935 │ SA11AI_81047921 │ 1217152 │ NULL │ EARMARKED FOR DCCC (C00000935) │ 4050820181544765179 │\n│ C00401224 │ A │ M6 │ P │ 201804059101865942 │ 24T │ IND │ STOTT, JIM │ CAPE NEDDICK │ ME │ 039020760 │ NONE │ NONE │ 05152017 │ 35 │ C00000935 │ SA11AI_81209209 │ 1217152 │ NULL │ EARMARKED FOR DCCC (C00000935) │ 4050820181544765180 │\n│ C00401224 │ A │ M6 │ P │ 201804059101865942 │ 24T │ IND │ STOTT, JIM │ CAPE NEDDICK │ ME │ 039020760 │ NONE │ NONE │ 05192017 │ 5 │ C00000935 │ SA11AI_81605223 │ 1217152 │ NULL │ EARMARKED FOR DCCC (C00000935) │ 4050820181544765181 │\n│ C00401224 │ A │ M6 │ P │ 201804059101865943 │ 24T │ IND │ STOTT, JIM │ CAPE NEDDICK │ ME │ 039020760 │ NONE │ NONE │ 05242017 │ 15 │ C00000935 │ SA11AI_82200022 │ 1217152 │ NULL │ EARMARKED FOR DCCC (C00000935) │ 4050820181544765182 │\n│ C00401224 │ A │ M6 │ P │ 201804059101865943 │ 24T │ IND │ STOTT, JIM │ CAPE NEDDICK │ ME │ 03902 │ NOT EMPLOYED │ NOT EMPLOYED │ 05292017 │ 100 │ C00213512 │ SA11AI_82589834 │ 1217152 │ NULL │ EARMARKED FOR NANCY PELOSI FOR CONGRESS (C00213512) │ 4050820181544765184 │\n│ C00401224 │ A │ M6 │ P │ 201804059101865944 │ 24T │ IND │ STOTT, JIM │ CAPE NEDDICK │ ME │ 039020760 │ NONE │ NONE │ 05302017 │ 35 │ C00000935 │ SA11AI_82643727 │ 1217152 │ NULL │ EARMARKED FOR DCCC (C00000935) │ 4050820181544765185 │\n│ C00401224 │ A │ M6 │ P │ 201804059101867050 │ 24T │ IND │ STRANGE, 
WINIFRED │ ANNA MSRIA │ FL │ 34216 │ NOT EMPLOYED │ NOT EMPLOYED │ 05162017 │ 25 │ C00000935 │ SA11AI_81325918 │ 1217152 │ NULL │ EARMARKED FOR DCCC (C00000935) │ 4050820181544768505 │\n│ C00401224 │ A │ M6 │ P │ 201804059101867051 │ 24T │ IND │ STRANGE, WINIFRED │ ANNA MSRIA │ FL │ 34216 │ NOT EMPLOYED │ NOT EMPLOYED │ 05232017 │ 25 │ C00000935 │ SA11AI_81991189 │ 1217152 │ NULL │ EARMARKED FOR DCCC (C00000935) │ 4050820181544768506 │\n│ … │ … │ … │ … │ … │ … │ … │ … │ … │ … │ … │ … │ … │ … │ … │ … │ … │ … │ … │ … │ … │\n└───────────┴────────┴────────┴────────┴────────────────────┴────────┴────────┴───────────────────┴──────────────┴────────┴───────────┴───────────────────┴─────────────────────────┴──────────┴───────┴───────────┴─────────────────┴─────────┴──────────┴─────────────────────────────────────────────────────────────────┴─────────────────────┘\n\n```\n:::\n:::\n\n\n::: {#ebb6e702 .cell execution_count=4}\n``` {.python .cell-code}\n# For a more comprehesive description of the columns and their meaning, see\n# https://www.fec.gov/campaign-finance-data/contributions-individuals-file-description/\ncolumns = {\n \"CMTE_ID\": \"keep\", # Committee ID\n \"AMNDT_IND\": \"drop\", # Amendment indicator. A = amendment, N = new, T = termination\n \"RPT_TP\": \"drop\", # Report type (monthly, quarterly, etc)\n \"TRANSACTION_PGI\": \"keep\", # Primary/general indicator\n \"IMAGE_NUM\": \"drop\", # Image number\n \"TRANSACTION_TP\": \"drop\", # Transaction type\n \"ENTITY_TP\": \"keep\", # Entity type\n \"NAME\": \"drop\", # Contributor name\n \"CITY\": \"keep\", # Contributor city\n \"STATE\": \"keep\", # Contributor state\n \"ZIP_CODE\": \"drop\", # Contributor zip code\n \"EMPLOYER\": \"drop\", # Contributor employer\n \"OCCUPATION\": \"drop\", # Contributor occupation\n \"TRANSACTION_DT\": \"keep\", # Transaction date\n \"TRANSACTION_AMT\": \"keep\", # Transaction amount\n # Other ID. For individual contributions will be null. For contributions from\n # other FEC committees, will be the committee ID of the other committee.\n \"OTHER_ID\": \"drop\",\n \"TRAN_ID\": \"drop\", # Transaction ID\n \"FILE_NUM\": \"drop\", # File number, unique number assigned to each report filed with the FEC\n \"MEMO_CD\": \"drop\", # Memo code\n \"MEMO_TEXT\": \"drop\", # Memo text\n \"SUB_ID\": \"drop\", # Submission ID. Unique number assigned to each transaction.\n}\n\nrenaming = {old: new for old, new in zip(raw.columns, columns.keys())}\nto_keep = [k for k, v in columns.items() if v == \"keep\"]\nkept = raw.relabel(renaming)[to_keep]\nkept\n```\n\n::: {.cell-output .cell-output-display execution_count=17}\n```{=html}\n
┏━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓\n┃ CMTE_ID ┃ TRANSACTION_PGI ┃ ENTITY_TP ┃ CITY ┃ STATE ┃ TRANSACTION_DT ┃ TRANSACTION_AMT ┃\n┡━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩\n│ string │ string │ string │ string │ string │ string │ int64 │\n├───────────┼─────────────────┼───────────┼──────────────┼────────┼────────────────┼─────────────────┤\n│ C00401224 │ P │ IND │ OCOEE │ FL │ 05182017 │ 10 │\n│ C00401224 │ P │ IND │ OCOEE │ FL │ 05192017 │ 15 │\n│ C00401224 │ P │ IND │ CAPE NEDDICK │ ME │ 05132017 │ 35 │\n│ C00401224 │ P │ IND │ CAPE NEDDICK │ ME │ 05152017 │ 35 │\n│ C00401224 │ P │ IND │ CAPE NEDDICK │ ME │ 05192017 │ 5 │\n│ C00401224 │ P │ IND │ CAPE NEDDICK │ ME │ 05242017 │ 15 │\n│ C00401224 │ P │ IND │ CAPE NEDDICK │ ME │ 05292017 │ 100 │\n│ C00401224 │ P │ IND │ CAPE NEDDICK │ ME │ 05302017 │ 35 │\n│ C00401224 │ P │ IND │ ANNA MSRIA │ FL │ 05162017 │ 25 │\n│ C00401224 │ P │ IND │ ANNA MSRIA │ FL │ 05232017 │ 25 │\n│ … │ … │ … │ … │ … │ … │ … │\n└───────────┴─────────────────┴───────────┴──────────────┴────────┴────────────────┴─────────────────┘\n\n```\n:::\n:::\n\n\n::: {#3f4ad522 .cell execution_count=5}\n``` {.python .cell-code}\n# 21 million rows\nkept.count()\n```\n\n::: {.cell-output .cell-output-display}\n```{=html}\n\n```\n:::\n\n::: {.cell-output .cell-output-display execution_count=18}\n\n::: {.ansi-escaped-output}\n```{=html}\n
┌──────────┐\n│ 21730730 │\n└──────────┘
\n```\n:::\n\n:::\n:::\n\n\nHuh, what's up with those timings? Previewing the head only took a fraction of a second,\nbut finding the number of rows took 10 seconds.\n\nThat's because duckdb is scanning the .csv file on the fly every time we access it.\nSo we only have to read the first few lines to get that preview,\nbut we have to read the whole file to get the number of rows.\n\nNote that this isn't a feature of Ibis, but a feature of Duckdb. This what I think is\none of the strengths of Ibis: Ibis itself doesn't have to implement any of the\noptimimizations or features of the backends. Those backends can focus on what they do\nbest, and Ibis can get those things for free.\n\nSo, let's tell duckdb to actually read in the file to its native format so later accesses\nwill be faster. This will be a ~20 seconds that we'll only have to pay once.\n\n::: {#c45e7319 .cell execution_count=6}\n``` {.python .cell-code}\nkept = kept.cache()\nkept\n```\n\n::: {.cell-output .cell-output-display execution_count=19}\n```{=html}\n┏━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓\n┃ CMTE_ID ┃ TRANSACTION_PGI ┃ ENTITY_TP ┃ CITY ┃ STATE ┃ TRANSACTION_DT ┃ TRANSACTION_AMT ┃\n┡━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩\n│ string │ string │ string │ string │ string │ string │ int64 │\n├───────────┼─────────────────┼───────────┼──────────────┼────────┼────────────────┼─────────────────┤\n│ C00401224 │ P │ IND │ OCOEE │ FL │ 05182017 │ 10 │\n│ C00401224 │ P │ IND │ OCOEE │ FL │ 05192017 │ 15 │\n│ C00401224 │ P │ IND │ CAPE NEDDICK │ ME │ 05132017 │ 35 │\n│ C00401224 │ P │ IND │ CAPE NEDDICK │ ME │ 05152017 │ 35 │\n│ C00401224 │ P │ IND │ CAPE NEDDICK │ ME │ 05192017 │ 5 │\n│ C00401224 │ P │ IND │ CAPE NEDDICK │ ME │ 05242017 │ 15 │\n│ C00401224 │ P │ IND │ CAPE NEDDICK │ ME │ 05292017 │ 100 │\n│ C00401224 │ P │ IND │ CAPE NEDDICK │ ME │ 05302017 │ 35 │\n│ C00401224 │ P │ IND │ ANNA MSRIA │ FL │ 05162017 │ 25 │\n│ C00401224 │ P │ IND │ ANNA MSRIA │ FL │ 05232017 │ 25 │\n│ … │ … │ … │ … │ … │ … │ … │\n└───────────┴─────────────────┴───────────┴──────────────┴────────┴────────────────┴─────────────────┘\n\n```\n:::\n:::\n\n\nLook, now accessing it only takes a fraction of a second!\n\n::: {#881326dd .cell execution_count=7}\n``` {.python .cell-code}\nkept.count()\n```\n\n::: {.cell-output .cell-output-display}\n```{=html}\n\n```\n:::\n\n::: {.cell-output .cell-output-display execution_count=20}\n\n::: {.ansi-escaped-output}\n```{=html}\n
┌──────────┐\n│ 21730730 │\n└──────────┘
\n```\n:::\n\n:::\n:::\n\n\n### Committees Data\n\nThe contributions only list an opaque `CMTE_ID` column. We want to know which actual\ncommittee this is. Let's load the committees table so we can lookup from\ncommittee ID to committee name.\n\n::: {#ae8760f6 .cell execution_count=8}\n``` {.python .cell-code}\ndef read_committees():\n committees_url = \"https://cg-519a459a-0ea3-42c2-b7bc-fa1143481f74.s3-us-gov-west-1.amazonaws.com/bulk-downloads/2018/committee_summary_2018.csv\"\n # This just creates a view, it doesn't actually fetch the data yet\n tmp = ibis.read_csv(committees_url)\n tmp = tmp[\"CMTE_ID\", \"CMTE_NM\"]\n # The raw table contains multiple rows for each committee id, so lets pick\n # an arbitrary row for each committee id as the representative name.\n deduped = tmp.group_by(\"CMTE_ID\").agg(CMTE_NM=_.CMTE_NM.arbitrary())\n return deduped\n\n\ncomms = read_committees().cache()\ncomms\n```\n\n::: {.cell-output .cell-output-display execution_count=21}\n```{=html}\n┏━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n┃ CMTE_ID ┃ CMTE_NM ┃\n┡━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n│ string │ string │\n├───────────┼────────────────────────────────────────────────────────────────┤\n│ C00659441 │ JASON ORTITAY FOR CONGRESS │\n│ C00297911 │ TEXAS FORESTRY ASSOCIATION FORESTRY POLITICAL ACTION COMMITTEE │\n│ C00340745 │ WADDELL & REED FINANCIAL, INC. POLITICAL ACTION COMMITTEE │\n│ C00679217 │ CANTWELL-WARREN VICTORY FUND │\n│ C00101204 │ NATIONAL FISHERIES INSTITUTE (FISHPAC) │\n│ C00010520 │ MEREDITH CORPORATION EMPLOYEES FUND FOR BETTER GOVERNMENT │\n│ C00532788 │ LAFAYETTE COUNTY DEMOCRATIC PARTY │\n│ C00128561 │ TOLL BROS. INC. PAC │\n│ C00510958 │ WENDYROGERS.ORG │\n│ C00665604 │ COMMITTEE TO ELECT BILL EBBEN │\n│ … │ … │\n└───────────┴────────────────────────────────────────────────────────────────┘\n\n```\n:::\n:::\n\n\nNow add the committee name to the contributions table:\n\n::: {#8fe204d4 .cell execution_count=9}\n``` {.python .cell-code}\ntogether = kept.left_join(comms, \"CMTE_ID\").drop(\"CMTE_ID\", \"CMTE_ID_right\")\ntogether\n```\n\n::: {.cell-output .cell-output-display execution_count=22}\n```{=html}\n
┏━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n┃ TRANSACTION_PGI ┃ ENTITY_TP ┃ CITY ┃ STATE ┃ TRANSACTION_DT ┃ TRANSACTION_AMT ┃ CMTE_NM ┃\n┡━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n│ string │ string │ string │ string │ string │ int64 │ string │\n├─────────────────┼───────────┼──────────────────┼────────┼────────────────┼─────────────────┼─────────────────────────────────────────────────┤\n│ P │ IND │ COHASSET │ MA │ 01312017 │ 230 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │\n│ P │ IND │ KEY LARGO │ FL │ 01042017 │ 5000 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │\n│ P │ IND │ LOOKOUT MOUNTAIN │ GA │ 01312017 │ 230 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │\n│ P │ IND │ NORTH YARMOUTH │ ME │ 01312017 │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │\n│ P │ IND │ ALPHARETTA │ GA │ 01312017 │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │\n│ P │ IND │ FALMOUTH │ ME │ 01312017 │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │\n│ P │ IND │ FALMOUTH │ ME │ 01312017 │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │\n│ P │ IND │ HOLLIS CENTER │ ME │ 01312017 │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │\n│ P │ IND │ FALMOUTH │ ME │ 01312017 │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │\n│ P │ IND │ ALEXANDRIA │ VA │ 01312017 │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │\n│ … │ … │ … │ … │ … │ … │ … │\n└─────────────────┴───────────┴──────────────────┴────────┴────────────────┴─────────────────┴─────────────────────────────────────────────────┘\n\n```\n:::\n:::\n\n\n## Cleaning\n\nFirst, let's drop any contributions that don't have a committee name. There are only 6 of them.\n\n::: {#215670b2 .cell execution_count=10}\n``` {.python .cell-code}\n# We can do this fearlessly, no .copy() needed, because\n# everything in Ibis is immutable. If we did this in pandas,\n# we might start modifying the original DataFrame accidentally!\ncleaned = together\n\nhas_name = cleaned.CMTE_NM.notnull()\ncleaned = cleaned[has_name]\nhas_name.value_counts()\n```\n\n::: {.cell-output .cell-output-display execution_count=23}\n```{=html}\n
┏━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┓\n┃ NotNull(CMTE_NM) ┃ NotNull(CMTE_NM)_count ┃\n┡━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━┩\n│ boolean │ int64 │\n├──────────────────┼────────────────────────┤\n│ True │ 21730724 │\n│ False │ 6 │\n└──────────────────┴────────────────────────┘\n\n```\n:::\n:::\n\n\nLet's look at the `ENTITY_TP` column. This represents the type of entity that\nmade the contribution:\n\n::: {#8e39507b .cell execution_count=11}\n``` {.python .cell-code}\ntogether.ENTITY_TP.value_counts()\n```\n\n::: {.cell-output .cell-output-display execution_count=24}\n```{=html}\n
┏━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓\n┃ ENTITY_TP ┃ ENTITY_TP_count ┃\n┡━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩\n│ string │ int64 │\n├───────────┼─────────────────┤\n│ NULL │ 5289 │\n│ CAN │ 13659 │\n│ COM │ 867 │\n│ IND │ 21687992 │\n│ ORG │ 18555 │\n│ PAC │ 3621 │\n│ PTY │ 49 │\n│ CCM │ 698 │\n└───────────┴─────────────────┘\n\n```\n:::\n:::\n\n\nWe only care about contributions from individuals.\n\nOnce we filter on this column, the contents of it are irrelevant, so let's drop it.\n\n::: {#e1453e27 .cell execution_count=12}\n``` {.python .cell-code}\ncleaned = together[_.ENTITY_TP == \"IND\"].drop(\"ENTITY_TP\")\n```\n:::\n\n\nIt looks like the `TRANSACTION_DT` column was a raw string like \"MMDDYYYY\",\nso let's convert that to a proper date type.\n\n::: {#bf3dadc7 .cell execution_count=13}\n``` {.python .cell-code}\nfrom ibis.expr.types import StringValue, DateValue\n\n\ndef mmddyyyy_to_date(val: StringValue) -> DateValue:\n return val.cast(str).lpad(8, \"0\").to_timestamp(\"%m%d%Y\").date()\n\n\ncleaned = cleaned.mutate(date=mmddyyyy_to_date(_.TRANSACTION_DT)).drop(\"TRANSACTION_DT\")\ncleaned\n```\n\n::: {.cell-output .cell-output-display execution_count=26}\n```{=html}\n
┏━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓\n┃ TRANSACTION_PGI ┃ CITY ┃ STATE ┃ TRANSACTION_AMT ┃ CMTE_NM ┃ date ┃\n┡━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩\n│ string │ string │ string │ int64 │ string │ date │\n├─────────────────┼──────────────────┼────────┼─────────────────┼─────────────────────────────────────────────────┼────────────┤\n│ P │ COHASSET │ MA │ 230 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │\n│ P │ KEY LARGO │ FL │ 5000 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-04 │\n│ P │ LOOKOUT MOUNTAIN │ GA │ 230 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │\n│ P │ NORTH YARMOUTH │ ME │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │\n│ P │ ALPHARETTA │ GA │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │\n│ P │ FALMOUTH │ ME │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │\n│ P │ FALMOUTH │ ME │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │\n│ P │ HOLLIS CENTER │ ME │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │\n│ P │ FALMOUTH │ ME │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │\n│ P │ ALEXANDRIA │ VA │ 384 │ UNUM GROUP POLITICAL ACTION COMMITTEE (UNUMPAC) │ 2017-01-31 │\n│ … │ … │ … │ … │ … │ … │\n└─────────────────┴──────────────────┴────────┴─────────────────┴─────────────────────────────────────────────────┴────────────┘\n\n```\n:::\n:::\n\n\nThe `TRANSACTION_PGI` column represents the type (primary, general, etc) of election,\nand the year. But it seems to be not very consistent:\n\n::: {#6cb98e2b .cell execution_count=14}\n``` {.python .cell-code}\ncleaned.TRANSACTION_PGI.topk(10)\n```\n\n::: {.cell-output .cell-output-display execution_count=27}\n```{=html}\n
┏━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┓\n┃ TRANSACTION_PGI ┃ CountStar() ┃\n┡━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━┩\n│ string │ int64 │\n├─────────────────┼─────────────┤\n│ P │ 17013596 │\n│ G2018 │ 2095123 │\n│ P2018 │ 1677183 │\n│ P2020 │ 208501 │\n│ O2018 │ 161874 │\n│ S2017 │ 124336 │\n│ G2017 │ 98401 │\n│ P2022 │ 91136 │\n│ P2017 │ 61153 │\n│ R2017 │ 54281 │\n└─────────────────┴─────────────┘\n\n```\n:::\n:::\n\n\n::: {#463caa6b .cell execution_count=15}\n``` {.python .cell-code}\ndef get_election_type(pgi: StringValue) -> StringValue:\n \"\"\"Use the first letter of the TRANSACTION_PGI column to determine the election type\n\n If the first letter is not one of the known election stage, then return null.\n \"\"\"\n election_types = {\n \"P\": \"primary\",\n \"G\": \"general\",\n \"O\": \"other\",\n \"C\": \"convention\",\n \"R\": \"runoff\",\n \"S\": \"special\",\n \"E\": \"recount\",\n }\n first_letter = pgi[0]\n return first_letter.substitute(election_types, else_=ibis.null())\n\n\ncleaned = cleaned.mutate(election_type=get_election_type(_.TRANSACTION_PGI)).drop(\n \"TRANSACTION_PGI\"\n)\ncleaned\n```\n\n::: {.cell-output .cell-output-display execution_count=28}\n```{=html}\n
┏━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n┃ CITY ┃ STATE ┃ TRANSACTION_AMT ┃ CMTE_NM ┃ date ┃ election_type ┃\n┡━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n│ string │ string │ int64 │ string │ date │ string │\n├────────────┼────────┼─────────────────┼───────────────────────────┼────────────┼───────────────┤\n│ ATLANTA │ GA │ 15 │ NANCY PELOSI FOR CONGRESS │ 2017-06-20 │ primary │\n│ AUSTIN │ TX │ 15 │ NANCY PELOSI FOR CONGRESS │ 2017-06-04 │ primary │\n│ WASHINGTON │ DC │ 25 │ NANCY PELOSI FOR CONGRESS │ 2017-06-23 │ primary │\n│ HONOLULU │ HI │ 10 │ NANCY PELOSI FOR CONGRESS │ 2017-04-20 │ primary │\n│ MAMARONECK │ NY │ 110 │ NANCY PELOSI FOR CONGRESS │ 2017-06-02 │ primary │\n│ REHOBOTH │ MA │ 10 │ NANCY PELOSI FOR CONGRESS │ 2017-06-01 │ primary │\n│ BERKELEY │ CA │ 25 │ NANCY PELOSI FOR CONGRESS │ 2017-06-05 │ primary │\n│ BEAUMONT │ TX │ 25 │ NANCY PELOSI FOR CONGRESS │ 2017-04-12 │ primary │\n│ CONCORD │ MA │ 200 │ NANCY PELOSI FOR CONGRESS │ 2017-05-05 │ primary │\n│ OXNARD │ CA │ 15 │ NANCY PELOSI FOR CONGRESS │ 2017-03-31 │ primary │\n│ … │ … │ … │ … │ … │ … │\n└────────────┴────────┴─────────────────┴───────────────────────────┴────────────┴───────────────┘\n\n```\n:::\n:::\n\n\nThat worked well! Only a few hundred of the ~20 million rows end up with a null\nelection type, so we were almost always able to determine the election type.\n\n::: {#ead49c9e .cell execution_count=16}\n``` {.python .cell-code}\ncleaned.election_type.topk(10)\n```\n\n::: {.cell-output .cell-output-display execution_count=29}\n```{=html}\n
┏━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┓\n┃ election_type ┃ CountStar() ┃\n┡━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━┩\n│ string │ int64 │\n├───────────────┼─────────────┤\n│ primary │ 19061953 │\n│ general │ 2216685 │\n│ other │ 161965 │\n│ special │ 149572 │\n│ runoff │ 69637 │\n│ convention │ 22453 │\n│ recount │ 5063 │\n│ NULL │ 664 │\n└───────────────┴─────────────┘\n\n```\n:::\n:::\n\n\nAbout 1/20 of transactions are negative. These could represent refunds, or they\ncould be data entry errors. Let's drop them to keep it simple.\n\n::: {#ee56a3f3 .cell execution_count=17}\n``` {.python .cell-code}\nabove_zero = cleaned.TRANSACTION_AMT > 0\ncleaned = cleaned[above_zero]\nabove_zero.value_counts()\n```\n\n::: {.cell-output .cell-output-display execution_count=30}\n```{=html}\n
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n┃ Greater(TRANSACTION_AMT, 0) ┃ Greater(TRANSACTION_AMT, 0)_count ┃\n┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n│ boolean │ int64 │\n├─────────────────────────────┼───────────────────────────────────┤\n│ True │ 20669809 │\n│ False │ 1018183 │\n└─────────────────────────────┴───────────────────────────────────┘\n\n```\n:::\n:::\n\n\n## Adding Features\n\nNow that the data is cleaned up to a usable format, let's add some features.\n\nFirst, it's useful to categorize donations by size, placing them into buckets\nof small, medium, large, etc.\n\n::: {#0ccc57df .cell execution_count=18}\n``` {.python .cell-code}\nedges = [\n 10,\n 50,\n 100,\n 500,\n 1000,\n 5000,\n]\nlabels = [\n \"<10\",\n \"10-50\",\n \"50-100\",\n \"100-500\",\n \"500-1000\",\n \"1000-5000\",\n \"5000+\",\n]\n\n\ndef bucketize(vals, edges, str_labels):\n # Uses Ibis's .bucket() method to create a categorical column\n int_labels = vals.bucket(edges, include_under=True, include_over=True)\n # Map the integer labels to the string labels\n int_to_str = {str(i): s for i, s in enumerate(str_labels)}\n return int_labels.cast(str).substitute(int_to_str)\n\n\nfeatured = cleaned.mutate(amount_bucket=bucketize(_.TRANSACTION_AMT, edges, labels))\nfeatured\n```\n\n::: {.cell-output .cell-output-display execution_count=31}\n```{=html}\n
┏━━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n┃ CITY ┃ STATE ┃ TRANSACTION_AMT ┃ CMTE_NM ┃ date ┃ election_type ┃ amount_bucket ┃\n┡━━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n│ string │ string │ int64 │ string │ date │ string │ string │\n├──────────────┼────────┼─────────────────┼───────────────────────┼────────────┼───────────────┼───────────────┤\n│ REMINGTON │ IN │ 50 │ AMERICA'S LIBERTY PAC │ 2017-05-30 │ primary │ 50-100 │\n│ REMINGTON │ IN │ 50 │ AMERICA'S LIBERTY PAC │ 2017-06-05 │ primary │ 50-100 │\n│ VANCOUVER │ WA │ 100 │ AMERICA'S LIBERTY PAC │ 2017-06-07 │ primary │ 100-500 │\n│ SOLANA BEACH │ CA │ 500 │ AMERICA'S LIBERTY PAC │ 2017-06-26 │ primary │ 500-1000 │\n│ HILLSDALE │ MI │ 250 │ AMERICA'S LIBERTY PAC │ 2017-05-15 │ primary │ 100-500 │\n│ MIDDLEBURY │ VT │ 500 │ NBT PAC FEDERAL FUND │ 2017-06-05 │ primary │ 500-1000 │\n│ WILLISTON │ VT │ 500 │ NBT PAC FEDERAL FUND │ 2017-05-30 │ primary │ 500-1000 │\n│ GLENMONT │ NY │ 350 │ NBT PAC FEDERAL FUND │ 2017-06-01 │ primary │ 100-500 │\n│ NORWICH │ NY │ 250 │ NBT PAC FEDERAL FUND │ 2017-05-31 │ primary │ 100-500 │\n│ CLIFTON PARK │ NY │ 250 │ NBT PAC FEDERAL FUND │ 2017-06-26 │ primary │ 100-500 │\n│ … │ … │ … │ … │ … │ … │ … │\n└──────────────┴────────┴─────────────────┴───────────────────────┴────────────┴───────────────┴───────────────┘\n\n```\n:::\n:::\n\n\n## Analysis\n\n### By donation size\n\nOne thing we can look at is the donation breakdown by size:\n- Are most donations small or large?\n- Where do politicians/committees get most of their money from? Large or small donations?\n\nWe also will compare performance of Ibis vs pandas during this groupby.\n\n::: {#6c9dae32 .cell execution_count=19}\n``` {.python .cell-code}\ndef summary_by(table, by):\n return table.group_by(by).agg(\n n_donations=_.count(),\n total_amount=_.TRANSACTION_AMT.sum(),\n mean_amount=_.TRANSACTION_AMT.mean(),\n median_amount=_.TRANSACTION_AMT.approx_median(),\n )\n\n\ndef summary_by_pandas(df, by):\n return df.groupby(by, as_index=False).agg(\n n_donations=(\"election_type\", \"count\"),\n total_amount=(\"TRANSACTION_AMT\", \"sum\"),\n mean_amount=(\"TRANSACTION_AMT\", \"mean\"),\n median_amount=(\"TRANSACTION_AMT\", \"median\"),\n )\n\n\n# persist the input data so the following timings of the group_by are accurate.\nsubset = featured[\"election_type\", \"amount_bucket\", \"TRANSACTION_AMT\"]\nsubset = subset.cache()\npandas_subset = subset.execute()\n```\n:::\n\n\nLet's take a look at what we are actually computing:\n\n::: {#1b310e3e .cell execution_count=20}\n``` {.python .cell-code}\nby_type_and_bucket = summary_by(subset, [\"election_type\", \"amount_bucket\"])\nby_type_and_bucket\n```\n\n::: {.cell-output .cell-output-display execution_count=33}\n```{=html}\n
┏━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n┃ election_type ┃ amount_bucket ┃ n_donations ┃ total_amount ┃ mean_amount ┃ median_amount ┃\n┡━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n│ string │ string │ int64 │ int64 │ float64 │ int64 │\n├───────────────┼───────────────┼─────────────┼──────────────┼──────────────┼───────────────┤\n│ primary │ 500-1000 │ 634677 │ 334630687 │ 527.245649 │ 500 │\n│ general │ 5000+ │ 3125 │ 44496373 │ 14238.839360 │ 7537 │\n│ special │ 500-1000 │ 7811 │ 4003293 │ 512.519908 │ 500 │\n│ runoff │ 100-500 │ 18193 │ 3088289 │ 169.751498 │ 100 │\n│ convention │ 500-1000 │ 1824 │ 945321 │ 518.268092 │ 500 │\n│ general │ <10 │ 115873 │ 536742 │ 4.632158 │ 5 │\n│ general │ 50-100 │ 304363 │ 16184312 │ 53.174374 │ 50 │\n│ general │ 1000-5000 │ 246101 │ 460025242 │ 1869.253851 │ 1978 │\n│ general │ 10-50 │ 660787 │ 14411588 │ 21.809733 │ 25 │\n│ other │ 500-1000 │ 119 │ 62535 │ 525.504202 │ 500 │\n│ … │ … │ … │ … │ … │ … │\n└───────────────┴───────────────┴─────────────┴──────────────┴──────────────┴───────────────┘\n\n```\n:::\n:::\n\n\nOK, now let's do our timings.\n\nOne interesting thing to pay attention to here is the execution time for the following\ngroupby. Before, we could get away with lazy execution: because we only wanted to preview\nthe first few rows, we only had to compute the first few rows, so all our previews were\nvery fast.\n\nBut now, as soon as we do a groupby, we have to actually go through the whole dataset\nin order to compute the aggregate per group. So this is going to be slower. BUT,\nduckdb is still quite fast. It only takes a couple hundred milliseconds to groupby-agg all 20 million rows!\n\n::: {#32424707 .cell execution_count=21}\n``` {.python .cell-code}\n%timeit summary_by(subset, [\"election_type\", \"amount_bucket\"]).execute() # .execute() so we actually fetch the data\n```\n\n::: {.cell-output .cell-output-stdout}\n```\n161 ms ± 4.75 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)\n```\n:::\n:::\n\n\nNow let's try the same thing in pandas:\n\n::: {#cc653b7f .cell execution_count=22}\n``` {.python .cell-code}\n%timeit summary_by_pandas(pandas_subset, [\"election_type\", \"amount_bucket\"])\n```\n\n::: {.cell-output .cell-output-stdout}\n```\n2.19 s ± 6.54 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n```\n:::\n:::\n\n\nIt takes about 2 seconds, which is more than 10 times slower than duckdb.\nAt this scale, it again doesn't matter,\nbut you can imagine that with a much larger dataset, it would.\n\nLet's also think about memory usage:\n\n::: {#c967896c .cell execution_count=23}\n``` {.python .cell-code}\npandas_subset.memory_usage(deep=True).sum() / 1e9 # GB\n```\n\n::: {.cell-output .cell-output-display execution_count=36}\n```\n2.782586667\n```\n:::\n:::\n\n\nThe source dataframe is a couple of gigabytes, so the peak memory usage during the groupby\nis probably going to be a bit higher than this. You could use a profiler\nsuch as [FIL](https://github.com/pythonspeed/filprofiler) if you wanted an exact number;\nI was too lazy to do that here.\n\nAgain, this works on my laptop at this dataset size, but with a much larger dataset I'd\nstart having problems. Duckdb, on the other hand, is designed to work out of core,\nso it should scale to datasets of hundreds of gigabytes, much larger than your\ncomputer's RAM.\n\n### Back to analysis\n\nOK, let's plot the result of that groupby.\n\nSurprise! (Or maybe not...) 
Most donations are small. But most of the money comes\nfrom donations larger than $1000.\n\nWell if that's the case, why do politicians spend so much time soliciting small\ndonations? One explanation is that they can use the number of donations\nas a marketing pitch, to show how popular they are, and thus how viable of a\ncandidate they are.\n\nThis also might explain whose interests are being served by our politicians.\n\n::: {#6808107a .cell execution_count=24}\n``` {.python .cell-code}\nimport altair as alt\n\n# Do some bookkeeping so the buckets are displayed smallest to largest on the charts\nbucket_col = alt.Column(\"amount_bucket:N\", sort=labels)\n\nn_by_bucket = (\n alt.Chart(by_type_and_bucket.execute())\n .mark_bar()\n .encode(\n x=bucket_col,\n y=\"n_donations:Q\",\n color=\"election_type:N\",\n )\n)\ntotal_by_bucket = (\n alt.Chart(by_type_and_bucket.execute())\n .mark_bar()\n .encode(\n x=bucket_col,\n y=\"total_amount:Q\",\n color=\"election_type:N\",\n )\n)\nn_by_bucket | total_by_bucket\n```\n\n::: {.cell-output .cell-output-display execution_count=37}\n```{=html}\n\n\n\n\n```\n:::\n:::\n\n\n### By election stage\n\nLet's look at how donations break down by election stage. Do people donate\ndifferently for primary elections vs general elections?\n\nLet's ignore everything but primary and general elections, since they are the\nmost common, and arguably the most important.\n\n::: {#8a758b63 .cell execution_count=25}\n``` {.python .cell-code}\ngb2 = by_type_and_bucket[_.election_type.isin((\"primary\", \"general\"))]\nn_donations_per_election_type = _.n_donations.sum().over(group_by=\"election_type\")\nfrac = _.n_donations / n_donations_per_election_type\ngb2 = gb2.mutate(frac_n_donations_per_election_type=frac)\ngb2\n```\n\n::: {.cell-output .cell-output-display execution_count=38}\n```{=html}\n
┏━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n┃ election_type ┃ amount_bucket ┃ n_donations ┃ total_amount ┃ mean_amount ┃ median_amount ┃ frac_n_donations_per_election_type ┃\n┡━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩\n│ string │ string │ int64 │ int64 │ float64 │ int64 │ float64 │\n├───────────────┼───────────────┼─────────────┼──────────────┼──────────────┼───────────────┼────────────────────────────────────┤\n│ general │ <10 │ 115873 │ 536742 │ 4.632158 │ 5 │ 0.052544 │\n│ general │ 50-100 │ 304363 │ 16184312 │ 53.174374 │ 50 │ 0.138017 │\n│ general │ 1000-5000 │ 246101 │ 460025242 │ 1869.253851 │ 1961 │ 0.111598 │\n│ general │ 10-50 │ 660787 │ 14411588 │ 21.809733 │ 25 │ 0.299642 │\n│ general │ 100-500 │ 700821 │ 123174568 │ 175.757530 │ 150 │ 0.317796 │\n│ general │ 500-1000 │ 174182 │ 91015697 │ 522.532162 │ 500 │ 0.078985 │\n│ general │ 5000+ │ 3125 │ 44496373 │ 14238.839360 │ 7601 │ 0.001417 │\n│ primary │ 5000+ │ 44085 │ 1558371116 │ 35349.237065 │ 10000 │ 0.002422 │\n│ primary │ 100-500 │ 3636287 │ 637353634 │ 175.275943 │ 150 │ 0.199765 │\n│ primary │ 500-1000 │ 634677 │ 334630687 │ 527.245649 │ 500 │ 0.034867 │\n│ … │ … │ … │ … │ … │ … │ … │\n└───────────────┴───────────────┴─────────────┴──────────────┴──────────────┴───────────────┴────────────────────────────────────┘\n\n```\n:::\n:::\n\n\nIt looks like primary elections get a larger proportion of small donations.\n\n::: {#30710ce2 .cell execution_count=26}\n``` {.python .cell-code}\nalt.Chart(gb2.execute()).mark_bar().encode(\n x=\"election_type:O\",\n y=\"frac_n_donations_per_election_type:Q\",\n color=bucket_col,\n)\n```\n\n::: {.cell-output .cell-output-display execution_count=39}\n```{=html}\n\n\n\n\n```\n:::\n:::\n\n\n### By recipient\n\nLet's look at the top players. Who gets the most donations?\n\nFar and away it is ActBlue, which acts as a conduit for donations to Democratic\ninterests.\n\nBeto O'Rourke is the top individual politician, hats off to him!\n\n::: {#97c0a2c8 .cell execution_count=27}\n``` {.python .cell-code}\nby_recip = summary_by(featured, \"CMTE_NM\")\nby_recip\n```\n\n::: {.cell-output .cell-output-display execution_count=40}\n```{=html}\n
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n┃ CMTE_NM ┃ n_donations ┃ total_amount ┃ mean_amount ┃ median_amount ┃\n┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n│ string │ int64 │ int64 │ float64 │ int64 │\n├──────────────────────────────────────────────────────────────────┼─────────────┼──────────────┼─────────────┼───────────────┤\n│ INDIANA DENTAL PAC │ 111 │ 62236 │ 560.684685 │ 410 │\n│ BEAM SUNTORY INC POLITICAL ACTION COMMITTEE │ 407 │ 64806 │ 159.228501 │ 65 │\n│ AMEDISYS, INC. POLITICAL ACTION COMMITTEE │ 132 │ 25000 │ 189.393939 │ 75 │\n│ PIEDMONT TRIAD ANESTHESIA P A FEDERAL PAC │ 132 │ 90375 │ 684.659091 │ 600 │\n│ AHOLD DELHAIZE USA, INC POLITICAL ACTION COMMITTEE │ 369 │ 48062 │ 130.249322 │ 100 │\n│ DIMITRI FOR CONGRESS │ 87 │ 34719 │ 399.068966 │ 250 │\n│ RELX INC. POLITICAL ACTION COMMITTEE │ 5491 │ 306908 │ 55.892916 │ 34 │\n│ MAKING INVESTMENTS MAJORITY INSURED PAC │ 14 │ 30600 │ 2185.714286 │ 1000 │\n│ AMERICAN ACADEMY OF OTOLARYNGOLOGY-HEAD AND NECK SURGERY ENT PAC │ 765 │ 285756 │ 373.537255 │ 365 │\n│ MIMI WALTERS VICTORY FUND │ 840 │ 2514824 │ 2993.838095 │ 2506 │\n│ … │ … │ … │ … │ … │\n└──────────────────────────────────────────────────────────────────┴─────────────┴──────────────┴─────────────┴───────────────┘\n\n```\n:::\n:::\n\n\n::: {#56418e6e .cell execution_count=28}\n``` {.python .cell-code}\ntop_recip = by_recip.order_by(ibis.desc(\"n_donations\")).head(10)\nalt.Chart(top_recip.execute()).mark_bar().encode(\n x=alt.X(\"CMTE_NM:O\", sort=\"-y\"),\n y=\"n_donations:Q\",\n)\n```\n\n::: {.cell-output .cell-output-display execution_count=41}\n```{=html}\n\n\n\n\n```\n:::\n:::\n\n\n### By Location\n\nWhere are the largest donations coming from?\n\n::: {#55b19fc3 .cell execution_count=29}\n``` {.python .cell-code}\nf2 = featured.mutate(loc=_.CITY + \", \" + _.STATE).drop(\"CITY\", \"STATE\")\nby_loc = summary_by(f2, \"loc\")\n# Drop the places with a small number of donations so we're\n# resistant to outliers for the mean\nby_loc = by_loc[_.n_donations > 1000]\nby_loc\n```\n\n::: {.cell-output .cell-output-display execution_count=42}\n```{=html}\n
┏━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓\n┃ loc ┃ n_donations ┃ total_amount ┃ mean_amount ┃ median_amount ┃\n┡━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩\n│ string │ int64 │ int64 │ float64 │ int64 │\n├─────────────────┼─────────────┼──────────────┼─────────────┼───────────────┤\n│ NAZARETH, PA │ 1460 │ 138710 │ 95.006849 │ 38 │\n│ FULSHEAR, TX │ 1504 │ 346778 │ 230.570479 │ 50 │\n│ GLOUCESTER, MA │ 4956 │ 563331 │ 113.666465 │ 25 │\n│ NORMAN, OK │ 6195 │ 945333 │ 152.596126 │ 35 │\n│ OAK PARK, IL │ 12017 │ 3413138 │ 284.025797 │ 39 │\n│ AUSTIN, TX │ 189865 │ 33315922 │ 175.471635 │ 38 │\n│ MIAMI BEACH, FL │ 12825 │ 10598453 │ 826.390097 │ 100 │\n│ SAN ANTONIO, TX │ 140529 │ 18925978 │ 134.676672 │ 35 │\n│ HAMBURG, NY │ 2322 │ 170254 │ 73.322136 │ 8 │\n│ PITTSBURGH, PA │ 74208 │ 14358578 │ 193.490971 │ 42 │\n│ … │ … │ … │ … │ … │\n└─────────────────┴─────────────┴──────────────┴─────────────┴───────────────┘\n\n```\n:::\n:::\n\n\n::: {#cc1697c5 .cell execution_count=30}\n``` {.python .cell-code}\ndef top_by(col):\n top = by_loc.order_by(ibis.desc(col)).head(10)\n return (\n alt.Chart(top.execute())\n .mark_bar()\n .encode(\n x=alt.X('loc:O', sort=\"-y\"),\n y=col,\n )\n )\n\n\ntop_by(\"n_donations\") | top_by(\"total_amount\") | top_by(\"mean_amount\") | top_by(\n \"median_amount\"\n)\n```\n\n::: {.cell-output .cell-output-display execution_count=43}\n```{=html}\n\n\n\n\n```\n:::\n:::\n\n\n### By month\n\nWhen do the donations come in?\n\n::: {#0d055d90 .cell execution_count=31}\n``` {.python .cell-code}\nby_month = summary_by(featured, _.date.month().name(\"month_int\"))\n# Sorta hacky, .substritute doesn't work to change dtypes (yet?)\n# so we cast to string and then do our mapping\nmonth_map = {\n \"1\": \"Jan\",\n \"2\": \"Feb\",\n \"3\": \"Mar\",\n \"4\": \"Apr\",\n \"5\": \"May\",\n \"6\": \"Jun\",\n \"7\": \"Jul\",\n \"8\": \"Aug\",\n \"9\": \"Sep\",\n \"10\": \"Oct\",\n \"11\": \"Nov\",\n \"12\": \"Dec\",\n}\nby_month = by_month.mutate(month_str=_.month_int.cast(str).substitute(month_map))\nby_month\n```\n\n::: {.cell-output .cell-output-display execution_count=44}\n```{=html}\n
┏━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━┓\n┃ month_int ┃ n_donations ┃ total_amount ┃ mean_amount ┃ median_amount ┃ month_str ┃\n┡━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━┩\n│ int32 │ int64 │ int64 │ float64 │ int64 │ string │\n├───────────┼─────────────┼──────────────┼─────────────┼───────────────┼───────────┤\n│ NULL │ 1514 │ 250297 │ 165.321664 │ 100 │ NULL │\n│ 1 │ 348979 │ 174837854 │ 500.998209 │ 124 │ Jan │\n│ 2 │ 581646 │ 255997655 │ 440.126219 │ 100 │ Feb │\n│ 3 │ 1042577 │ 430906797 │ 413.309326 │ 81 │ Mar │\n│ 4 │ 1088244 │ 299252692 │ 274.986760 │ 50 │ Apr │\n│ 5 │ 1374247 │ 387317192 │ 281.839576 │ 48 │ May │\n│ 6 │ 1667285 │ 465305247 │ 279.079610 │ 44 │ Jun │\n│ 7 │ 1607053 │ 320528605 │ 199.451172 │ 35 │ Jul │\n│ 8 │ 2023466 │ 473544182 │ 234.026261 │ 35 │ Aug │\n│ 9 │ 2583847 │ 697888624 │ 270.096729 │ 38 │ Sep │\n│ … │ … │ … │ … │ … │ … │\n└───────────┴─────────────┴──────────────┴─────────────┴───────────────┴───────────┘\n\n```\n:::\n:::\n\n\n::: {#7002ddb8 .cell execution_count=32}\n``` {.python .cell-code}\nmonths_in_order = list(month_map.values())\nalt.Chart(by_month.execute()).mark_bar().encode(\n x=alt.X(\"month_str:O\", sort=months_in_order),\n y=\"n_donations:Q\",\n)\n```\n\n::: {.cell-output .cell-output-display execution_count=45}\n```{=html}\n\n\n\n\n```\n:::\n:::\n\n\n## Conclusion\n\nThanks for following along! I hope you've learned something about Ibis, and\nmaybe even about campaign finance.\n\nIbis is a great tool for exploring data. I now find myself reaching for it\nwhen in the past I would have reached for pandas.\n\nSome of the highlights for me:\n\n- Fast, lazy execution, a great display format, and good type hinting/editor support for a great REPL experience.\n- Very well thought-out API and semantics (e.g. `isinstance(val, NumericValue)`?? That's beautiful!)\n- Fast and fairly complete string support, since I work with a lot of text data.\n- Extremely responsive maintainers. Sometimes I've submitted multiple feature requests and bug reports in a single day, and a PR has been merged by the next day.\n- Escape hatch to SQL. I didn't have to use that here, but if something isn't supported, you can always fall back to SQL.\n\nCheck out [The Ibis Website](https://ibis-project.org/) for more information.\n\n", "supporting": [ - "index_files/figure-html" + "index_files" ], "filters": [], "includes": { "include-in-header": [ - "\n\n\n" + "\n\n\n" ] } } diff --git a/docs/_quarto.yml b/docs/_quarto.yml index cbe963e1e64d..3a527c34ff04 100644 --- a/docs/_quarto.yml +++ b/docs/_quarto.yml @@ -298,10 +298,6 @@ quartodoc: - name: param dynamic: true signature_name: full - - name: NA - # Ideally exposed under `ibis` but that doesn't seem to work?? 
- package: ibis.expr.api - signature_name: full - name: "null" dynamic: true signature_name: full diff --git a/docs/posts/campaign-finance/index.qmd b/docs/posts/campaign-finance/index.qmd index 3d8d9fc19330..a2a0a287e388 100644 --- a/docs/posts/campaign-finance/index.qmd +++ b/docs/posts/campaign-finance/index.qmd @@ -245,7 +245,7 @@ def get_election_type(pgi: StringValue) -> StringValue: "E": "recount", } first_letter = pgi[0] - return first_letter.substitute(election_types, else_=ibis.NA) + return first_letter.substitute(election_types, else_=ibis.null()) cleaned = cleaned.mutate(election_type=get_election_type(_.TRANSACTION_PGI)).drop( diff --git a/docs/tutorials/ibis-for-pandas-users.qmd b/docs/tutorials/ibis-for-pandas-users.qmd index 876fe4ac068f..e0fc2f5908e5 100644 --- a/docs/tutorials/ibis-for-pandas-users.qmd +++ b/docs/tutorials/ibis-for-pandas-users.qmd @@ -507,7 +507,7 @@ represented by `NaN`. This can be confusing when working with numeric data, since `NaN` is also a valid floating point value (along with `+/-inf`). In Ibis, we try to be more precise: All data types are nullable, and we use -`ibis.NA` to represent `NULL` values, and all datatypes have a `.isnull()` method. +`ibis.null()` to represent `NULL` values, and all datatypes have a `.isnull()` method. For floating point values, we use different values for `NaN` and `+/-inf`, and there are the additional methods `.isnan()` and `.isinf()`. diff --git a/docs/tutorials/ibis-for-sql-users.qmd b/docs/tutorials/ibis-for-sql-users.qmd index 577f7b015111..534090bfce64 100644 --- a/docs/tutorials/ibis-for-sql-users.qmd +++ b/docs/tutorials/ibis-for-sql-users.qmd @@ -522,10 +522,10 @@ ibis.to_sql(expr) ### Using `NULL` in expressions -To use `NULL` in an expression, either use the special `ibis.NA` value: +To use `NULL` in an expression, use `ibis.null()` value: ```{python} -pos_two = (t.two > 0).ifelse(t.two, ibis.NA) +pos_two = (t.two > 0).ifelse(t.two, ibis.null()) expr = t.mutate(two_positive=pos_two) ibis.to_sql(expr) ``` diff --git a/ibis/__init__.py b/ibis/__init__.py index 0bf64c3c4c42..28924feb23e2 100644 --- a/ibis/__init__.py +++ b/ibis/__init__.py @@ -4,6 +4,9 @@ __version__ = "9.0.0" +import warnings +from typing import Any + from ibis import examples, util from ibis.backends import BaseBackend from ibis.common.exceptions import IbisError @@ -36,7 +39,7 @@ def __dir__() -> list[str]: return sorted(out) -def __getattr__(name: str) -> BaseBackend: +def load_backend(name: str) -> BaseBackend: """Load backends in a lazy way with `ibis.