diff --git a/README.md b/README.md
index b544bc055b7f..8370dac9c4bb 100644
--- a/README.md
+++ b/README.md
@@ -19,8 +19,6 @@
 There is no need to write special CUDA, OpenMP or custom threading code.
 Accelerator back-ends can be mixed within a device queue.
 The decision which accelerator back-end executes which kernel can be made at runtime.
-The **alpaka** API is currently unstable (beta state).
-
 The abstraction used is very similar to the CUDA grid-blocks-threads division strategy.
 Algorithms that should be parallelized have to be divided into a multi-dimensional grid consisting of small uniform work items.
 These functions are called kernels and are executed in parallel threads.