From 431d7100f51e7c159d87f458f441dedd4f1495c2 Mon Sep 17 00:00:00 2001
From: Ziming Liu
Date: Mon, 19 Aug 2024 23:00:29 -0400
Subject: [PATCH] Update README.md

---
 README.md | 52 +---------------------------------------------------
 1 file changed, 1 insertion(+), 51 deletions(-)

diff --git a/README.md b/README.md
index 8984965a..da30966a 100644
--- a/README.md
+++ b/README.md
@@ -1,61 +1,11 @@
 kan_plot
 
-# !! Major Updates on July 14, 2024
-
-* `model.train()` has been changed to `model.fit()`
-* Some other small features are changed (e.g., create_dataset has been moved to kan.utils). I have updated and checked that the notebooks in `./tutorials` are runnable on CPUs, so please refer to those tutorials for updated/new functionalities. Documentation hasn't been updated yet but will be updated soon.
-
-For pypi users, this is the most recent version 0.2.1.
-
-New functionalities include (documentation later):
-* including multiplications in KANs. [Tutorial](https://github.com/KindXiaoming/pykan/blob/master/tutorials/Interp_1_Hello%2C%20MultKAN.ipynb)
-* the speed mode. Speed up your KAN using `model = model.speed()` if you never use the symbolic functionalities. [Tutorial](https://github.com/KindXiaoming/pykan/blob/master/tutorials/Example_2_speed_up.ipynb)
-* Compiling symbolic formulas into KANs. [Tutorial](https://github.com/KindXiaoming/pykan/blob/master/tutorials/Interp_3_KAN_Compiler.ipynb)
-* Feature attribution and pruning inputs. [Tutorial](https://github.com/KindXiaoming/pykan/blob/master/tutorials/Interp_4_feature_attribution.ipynb)
-
 # Kolmogorov-Arnold Networks (KANs)
 
-This is the github repo for the paper ["KAN: Kolmogorov-Arnold Networks"](https://arxiv.org/abs/2404.19756). Find the documentation [here](https://kindxiaoming.github.io/pykan/). Here's [author's note](https://github.com/KindXiaoming/pykan?tab=readme-ov-file#authors-note) responding to current hype of KANs.
+This is the github repo for the papers ["KAN: Kolmogorov-Arnold Networks"](https://arxiv.org/abs/2404.19756) and ["KAN 2.0: Kolmogorov-Arnold Networks Meet Science"](https://arxiv.org/abs/2408.10205). Find the documentation [here](https://kindxiaoming.github.io/pykan/).
 
 Kolmogorov-Arnold Networks (KANs) are promising alternatives of Multi-Layer Perceptrons (MLPs). KANs have strong mathematical foundations just like MLPs: MLPs are based on the universal approximation theorem, while KANs are based on Kolmogorov-Arnold representation theorem. KANs and MLPs are dual: KANs have activation functions on edges, while MLPs have activation functions on nodes. This simple change makes KANs better (sometimes much better!) than MLPs in terms of both model **accuracy** and **interpretability**. A quick intro of KANs [here](https://kindxiaoming.github.io/pykan/intro.html).
 
-mlp_kan_compare
-
-## Accuracy
-**KANs have faster scaling than MLPs. KANs have better accuracy than MLPs with fewer parameters.**
-
-Please set `torch.set_default_dtype(torch.float64)` if you want high precision.
-
-**Example 1: fitting symbolic formulas**
-Screenshot 2024-04-30 at 10 55 30
-
-**Example 2: fitting special functions**
-Screenshot 2024-04-30 at 11 07 20
-
-**Example 3: PDE solving**
-Screenshot 2024-04-30 at 10 57 25
-
-**Example 4: avoid catastrophic forgetting**
-Screenshot 2024-04-30 at 11 04 36
-
-## Interpretability
-**KANs can be intuitively visualized. KANs offer interpretability and interactivity that MLPs cannot provide. We can use KANs to potentially discover new scientific laws.**
-
-**Example 1: Symbolic formulas**
-Screenshot 2024-04-30 at 11 04 56
-
-**Example 2: Discovering mathematical laws of knots**
-Screenshot 2024-04-30 at 11 05 25
-
-**Example 3: Discovering physical laws of Anderson localization**
-Screenshot 2024-04-30 at 11 05 53
-
-**Example 4: Training of a three-layer KAN**
-
-![kan_training_low_res](https://github.com/KindXiaoming/pykan/assets/23551623/e9f215c7-a393-46b9-8528-c906878f015e)
-
-
-
 ## Installation
 
 Pykan can be installed via PyPI or directly from GitHub.
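Note: the removed changelog block above mentions that `model.train()` was renamed to `model.fit()`, that `create_dataset` moved to `kan.utils`, the optional `model.speed()` mode, and setting `torch.set_default_dtype(torch.float64)` for high precision. Since those quick-start notes are deleted by this patch, a minimal sketch of that workflow is kept below for reference. It is not part of the patch; the toy target function and the exact `KAN(...)`/`create_dataset(...)` arguments are assumptions drawn from the pykan tutorials, not verified against this commit.

```python
# Minimal sketch of the post-0.2.1 workflow referenced in the removed notes.
# Constructor and dataset arguments are assumptions based on the pykan tutorials.
import torch
from kan import KAN
from kan.utils import create_dataset  # moved to kan.utils per the removed changelog note

torch.set_default_dtype(torch.float64)  # higher precision, as the removed note suggests

# Toy regression target: f(x1, x2) = exp(sin(pi*x1) + x2^2)
f = lambda x: torch.exp(torch.sin(torch.pi * x[:, [0]]) + x[:, [1]] ** 2)
dataset = create_dataset(f, n_var=2)

# Two inputs, five hidden nodes, one output
model = KAN(width=[2, 5, 1], grid=3, k=3)
model = model.speed()  # optional speed mode: skip symbolic bookkeeping during training
model.fit(dataset, opt="LBFGS", steps=20)  # model.train() has been renamed to model.fit()
```

For installation, the README's untouched `## Installation` section covers both routes; the exact commands are not shown in this hunk, but the PyPI package name is `pykan` (e.g., `pip install pykan`).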