add example4 - dmdarray adding #5
Conversation
Running on a system with an Intel GPU (DevCloud in particular) results in an error:
uab8f7faf7d5cc8a0d0c8bf0d3553a43@idc-beta-batch-head-node:~/work/tutorial-haichangsi$ I_MPI_OFFLOAD=0 mpirun -n 4 ./build/src/example4
[1699012212.351400] [idc-beta-batch-head-node:1845409:0] ib_iface.c:1017 UCX ERROR ibv_create_cq(cqe=4096) failed: Cannot allocate memory : Please set max locked memory (ulimit -l) to 'unlimited' (current: 4096 kbytes)
Abort(1615247) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(176)........:
MPID_Init(1548)..............:
MPIDI_OFI_mpi_init_hook(1632):
create_vni_context(2208).....: OFI endpoint open failed (ofi_init.c:2208:create_vni_context:Input/output error)
(Tested with a locally installed IMPI 2021.11.) Please verify on a different system/configuration. Perhaps setting some I_MPI_* environment variables would help.
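Side note: the UCX message above already suggests one possible workaround; raising the locked-memory limit (ulimit -l unlimited) before launching mpirun may get past the ibv_create_cq failure, if the node allows it.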
namespace mhp = dr::mhp;
using T = int;
Please add a descriptive comment, e.g.
/* The example presents operation of ... The result is stored in ... */
I thought about describing all the new examples together in the README file.
So a brief summary, like
/* add content of two 2-d arrays and display the results */
The file names don't describe the content (they are just exampleX), so as a reader of the tutorial I would appreciate some textual link between the description and the particular code.
mhp::distributed_mdarray<T, 2> a(extents2d);
mhp::distributed_mdarray<T, 2> b(extents2d);
mhp::distributed_mdarray<T, 2> c(extents2d);
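For context, a complete example4 might look like the sketch below. This is not the PR's code: it assumes the dr::mhp algorithms iota, fill and for_each and the mhp::views::zip view behave as in other distributed-ranges examples, the extents and initial values are made up for illustration, and output is omitted.

#include <array>
#include <dr/mhp.hpp>

namespace mhp = dr::mhp;
using T = int;

int main() {
  mhp::init(); // set up the MHP runtime (MPI-based)

  std::array<std::size_t, 2> extents2d = {4, 4}; // illustrative shape

  /* add content of two 2-d arrays; the result is stored in c */
  mhp::distributed_mdarray<T, 2> a(extents2d);
  mhp::distributed_mdarray<T, 2> b(extents2d);
  mhp::distributed_mdarray<T, 2> c(extents2d);

  mhp::iota(a, 0);  // a = 0, 1, 2, ...
  mhp::fill(b, 10); // b = 10 everywhere

  // element-wise c = a + b across all ranks
  mhp::for_each(mhp::views::zip(a, b, c), [](auto v) {
    auto [x, y, z] = v;
    z = x + y;
  });

  mhp::finalize();
  return 0;
}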
Please add a comment to lines 21-22, encouraging a tutorial user to change the initial content of arrays a & b.
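For instance, such a comment might read (a hypothetical sketch; the actual initialization in the PR may differ):

/* Initial content of arrays a and b; change the values
   below and re-run the example to see how c changes. */
mhp::iota(a, 0);
mhp::fill(b, 10);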
Done
As we discussed, I think it's due to your specific MPI configuration.
Branch updated from b134a09 to 01358d1.