Merge pull request #12 from JuliaORNL/readme
Add README and fix Project.toml
Showing 2 changed files with 38 additions and 1 deletion.
Project.toml
@@ -1,6 +1,6 @@
name = "JACC"
uuid = "0979c8fe-16a4-4796-9b82-89a9f10403ea"
-authors = ["pedrovalerolara <valerolarap@ornl.com>", "williamfgc <[email protected]>"]
+authors = ["pedrovalerolara <valerolarap@ornl.gov>", "williamfgc <[email protected]>"]
version = "0.0.1"

[deps]
README.md
@@ -1,2 +1,39 @@
# JACC.jl
CPU/GPU performance portable layer for Julia

JACC.jl follows a function-as-argument approach in combination with the power of Julia's ecosystem: multiple dispatch, GPU back-end access, and weak dependencies (available since Julia v1.9). Similar to portable layers such as Kokkos, users pass a size and a function, including its arguments, to a `parallel_for` or `parallel_reduce` call (a `parallel_reduce` sketch is shown after the example below).
The overall goal is to write a single source code that can be executed in multiple CPU and GPU parallel programming environments. Following the same principle as OpenACC, it is meant to simplify programming for heterogeneous CPU and GPU systems.

JuliaCon 2023 presentation [video](https://live.juliacon.org/talk/AY8EUX).

1. Set a back end: "cuda", "amdgpu", or "threads" (default) with `JACC.JACCPreferences`, which generates a `LocalPreferences.toml` file

```
julia> import JACC.JACCPreferences
julia> JACCPreferences.set_backend("cuda")
```

2. Run a kernel example (see the tests directory)

```
import JACC

function axpy(i, alpha, x, y)
    if i <= length(x)
        @inbounds x[i] += alpha * y[i]
    end
end

N = 10
# Generate random vectors x and y of length N with values in the interval [0, 100]
x = round.(rand(Float32, N) * 100)
y = round.(rand(Float32, N) * 100)
alpha = 2.5

x_d = JACC.Array(x)
y_d = JACC.Array(y)
JACC.parallel_for(N, axpy, alpha, x_d, y_d)
```
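The same pattern extends to reductions. The sketch below computes a dot product with `JACC.parallel_reduce`; it is illustrative only, assuming the call takes the same `(size, function, arguments...)` form as `parallel_for` described above, performs a sum reduction over the per-index kernel results, and returns the reduced value. The kernel name `dot_kernel` is a placeholder chosen here.

```
import JACC

# Per-index contribution to the dot product; combining the contributions
# into a single scalar is assumed to be handled by parallel_reduce.
function dot_kernel(i, x, y)
    @inbounds return x[i] * y[i]
end

N = 10
x = round.(rand(Float32, N) * 100)
y = round.(rand(Float32, N) * 100)

x_d = JACC.Array(x)
y_d = JACC.Array(y)
result = JACC.parallel_reduce(N, dot_kernel, x_d, y_d)
```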

We currently support a limited number of configurations.
We hope to study and incorporate more relevant cases and dimension shapes as needed.