Update examples and add demos for tf-shell features.
james-choncholas committed Oct 30, 2024
1 parent 19c1d4e commit d546e7d
Showing 9 changed files with 698 additions and 1,638 deletions.
162 changes: 162 additions & 0 deletions examples/automatic_parameters_demo.ipynb
@@ -0,0 +1,162 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automatic HE Parameter Selection\n",
"\n",
"This notebook demonstrates how to use `tf-shell` to automatically choose low\n",
"level parameters (like the plaintext modulus and ciphertext moduli) for the BGV\n",
"HE scheme. While these parameters can be chosen manually, as shown in other\n",
"examples, it is convenient to let `tf-shell` choose them.\n",
"\n",
"Since the HE parameters depend on the depth of a computation, they must be\n",
"fixed before the computation starts. `tf-shell` does this by extending\n",
"TensorFlow's graph compiler, grappler, with some convenient\n",
"homomorphic-encryption (HE) specific features, one of which is automatic\n",
"parameter selection.\n",
"\n",
"As such, automatic parameter selection is only available when using TensorFlow's\n",
"deferred execution mode (graph mode). This way, the graph is available for\n",
"inspection (to estimate ciphertext noise growth) and modification (to inject\n",
"generated parameters) before it is executed."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-10-29 19:56:56.261116: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
"2024-10-29 19:56:56.287144: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
"To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
]
}
],
"source": [
"import tensorflow as tf\n",
"import tf_shell"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"a = [1, 2, 3]\n",
"b = [4, 5, 6]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here we define the function we'd like to compute. TensorFlow will first trace\n",
"this function (without executing it) to build a graph of the computation. Then,\n",
"during a graph compiler optimization pass, `tf-shell` will replace the\n",
"\"autocontext\" placeholder Op with parameters generated for this specific\n",
"computation based on statistical estimation of the noise growth and the initial\n",
"plaintext size.\n",
"\n",
"Note, the `create_autocontext64` function must be called from inside a\n",
"`tf.function` in order to execute in deferred mode."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"@tf.function\n",
"def foo(cleartext_a, cleartext_b):\n",
" shell_context = tf_shell.create_autocontext64(\n",
" log2_cleartext_sz=4, # Maximum size of the cleartexts (including the scaling factor).\n",
" scaling_factor=1, # The scaling factor (analagous to fixed-point but not necessarily base 2).\n",
" noise_offset_log2=0, # Extra buffer for noise growth.\n",
" )\n",
" key = tf_shell.create_key64(shell_context)\n",
" a = tf_shell.to_encrypted(cleartext_a, key, shell_context)\n",
" b = tf_shell.to_shell_plaintext(cleartext_b, shell_context)\n",
"\n",
" intermediate = a * b\n",
" result = tf_shell.to_tensorflow(intermediate, key)\n",
" return result"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Selected BGV parameters:\n",
"log_n: 11\n",
"t: 12289 (14 bits, min:4)\n",
"qs: 70371484213249 (47 bits, min:47 = t:14 + noise:33 + offset:0)\n",
"INFO: Generating key\n",
"tf.Tensor([ 4 10 18 ... 0 0 0], shape=(2048,), dtype=int32)\n"
]
}
],
"source": [
"tf_shell.enable_optimization() # Enable the autoparameter graph optimization pass.\n",
"\n",
"a = [1, 2, 3]\n",
"b = [4, 5, 6]\n",
"c = foo(a, b)\n",
"\n",
"print(c)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`tf-shell` selected a plaintext modulus `t` to be at least 4 bits, and ciphertext\n",
"modulus `Q` as a produt of smaller moduli `qs` for representation of ciphertexts\n",
"in RNS (residual number system). `Q` is chosen to be large enough to support\n",
"noise growth in the computation without overflowing. Since this computation is\n",
"small, only one ciphertext modulus is needed.\n",
"\n",
"Note that `tf-shell` treats the first dimension of data as the packing dimension\n",
"of the BGV scheme (the slotting dimension). When the function is first traced,\n",
"the size of this dimension is unknown, because the ring degree of the\n",
"ciphertexts has not been chosen yet as it depends on `Q`, which depends on the\n",
"estimated noise growth. In the example above, the three elements of the input\n",
"vectors are packed into this first dimension for efficiency purposes. The\n",
"remaining slots in the ciphertexts went unused."
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
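As a sanity check on the markdown cell above, here is a minimal sketch in plain Python (no tf-shell calls) that re-traces the arithmetic in the printed parameters. The 33-bit noise estimate and the chosen moduli are copied from the notebook's output; they are produced by tf-shell's grappler pass, not derived here.

# Values copied from the "Selected BGV parameters" output above.
log2_cleartext_sz = 4   # requested via create_autocontext64(log2_cleartext_sz=4)
noise_bits = 33         # noise growth estimated by tf-shell's grappler pass
offset_bits = 0         # noise_offset_log2 passed to create_autocontext64
t = 12289               # plaintext modulus chosen by tf-shell
q = 70371484213249      # the single RNS ciphertext modulus chosen by tf-shell
log_n = 11              # ring degree exponent chosen by tf-shell

# t must be large enough to hold the requested cleartext size.
assert t.bit_length() == 14 and t.bit_length() >= log2_cleartext_sz

# Q (here a single modulus) must cover the plaintext plus the estimated noise.
assert q.bit_length() == 47 == t.bit_length() + noise_bits + offset_bits

# The ring degree fixes the packing (slot) dimension: 2**11 = 2048 slots,
# which is why the decrypted result prints with shape=(2048,) even though
# only the first three slots hold 1*4, 2*5, 3*6 = 4, 10, 18.
assert 2 ** log_n == 2048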
40 changes: 23 additions & 17 deletions examples/benchmark.ipynb
@@ -4,10 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Introduction to tf-shell\n",
"\n",
"To get started, `pip install tf-shell`. tf-shell has a few modules, the one used\n",
"in this notebook is `tf_shell`."
"# Benchmarking tf-shell"
]
},
{
@@ -19,10 +16,19 @@
"name": "stderr",
"output_type": "stream",
"text": [
"2024-09-12 15:46:51.029615: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
"2024-09-12 15:46:51.173136: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
"2024-10-29 21:41:36.488386: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
"2024-10-29 21:41:36.514318: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
"To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"INFO: Generating key\n",
"INFO: Generating rotation key\n",
"INFO: Generating rotation key\n"
]
}
],
"source": [
@@ -58,7 +64,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"0.5914442079999844\n"
"0.5060953950014664\n"
]
}
],
@@ -79,7 +85,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"1.9706441910000194\n"
"0.17610475800029235\n"
]
}
],
@@ -100,7 +106,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"0.5826959020000686\n"
"0.5579067959988606\n"
]
}
],
@@ -121,7 +127,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"1.0182888270001058\n"
"0.7779848270001821\n"
]
}
],
@@ -142,7 +148,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"1.1156545989999813\n"
"0.44140414300272823\n"
]
}
],
@@ -163,7 +169,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"1.2263275259999773\n"
"1.415064643999358\n"
]
}
],
@@ -184,7 +190,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"0.9413914479998766\n"
"0.8980931189980765\n"
]
}
],
@@ -205,7 +211,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"0.7464927729999999\n"
"0.7085658289979619\n"
]
}
],
@@ -226,7 +232,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"26.568641370000023\n"
"27.201639023998723\n"
]
}
],
@@ -247,7 +253,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"350.995303421\n"
"360.84974249600054\n"
]
}
],
@@ -268,7 +274,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"5.003688195999985\n"
"6.758062808999966\n"
]
}
],