add example4 - dmdarray adding #5

Merged
merged 1 commit into oneapi-src:main on Nov 13, 2023

Conversation

haichangsi (Contributor)

No description provided.

@haichangsi haichangsi added the enhancement New feature or request label Nov 3, 2023
@haichangsi haichangsi self-assigned this Nov 3, 2023

@mateuszpn (Contributor) left a comment


Running on a system with an Intel GPU (DevCloud in particular) results in an error:

uab8f7faf7d5cc8a0d0c8bf0d3553a43@idc-beta-batch-head-node:~/work/tutorial-haichangsi$ I_MPI_OFFLOAD=0 mpirun -n 4 ./build/src/example4
[1699012212.351400] [idc-beta-batch-head-node:1845409:0]        ib_iface.c:1017 UCX  ERROR ibv_create_cq(cqe=4096) failed: Cannot allocate memory : Please set max locked memory (ulimit -l) to 'unlimited' (current: 4096 kbytes)
Abort(1615247) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(176)........:
MPID_Init(1548)..............:
MPIDI_OFI_mpi_init_hook(1632):
create_vni_context(2208).....: OFI endpoint open failed (ofi_init.c:2208:create_vni_context:Input/output error)

(Tested with a locally installed Intel MPI 2021.11.) Please verify on a different system/config. Maybe setting some I_MPI_* environment variables will help.


namespace mhp = dr::mhp;
using T = int;

mateuszpn (Contributor)

Please add a descriptive comment, e.g.
/* The example presents operation of ... The result is stored in ... */

haichangsi (Contributor Author)

I thought about describing all the new examples together in the README file.

mateuszpn (Contributor)

So, a brief summary, like
/* add the contents of two 2-D arrays and display the result */
The file names (just exampleX) do not describe the content, so as a reader of the tutorial I would appreciate some textual link between the description and the particular code.


mhp::distributed_mdarray<T, 2> a(extents2d);
mhp::distributed_mdarray<T, 2> b(extents2d);
mhp::distributed_mdarray<T, 2> c(extents2d);
mateuszpn (Contributor)

Please add a comment to lines 21-22, encouraging a tutorial user to change the initial content of arrays a & b.

haichangsi (Contributor Author)

Done
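
For orientation, here is a minimal, hypothetical sketch of what example4 could look like with the requested comments in place. It is not the PR's actual diff: the extents type, the initial values, and the use of mhp::iota, mhp::views::zip, and mhp::for_each are assumptions based on common dr::mhp usage, so treat it as a sketch rather than the merged code.

```cpp
#include <array>

#include <dr/mhp.hpp>

namespace mhp = dr::mhp;
using T = int;

/* example4: add the contents of two 2-D distributed mdarrays (a and b)
   element-wise and store the result in a third mdarray (c). */
int main() {
  mhp::init();

  // Small extents chosen arbitrarily for this sketch.
  std::array<std::size_t, 2> extents2d = {4, 4};

  mhp::distributed_mdarray<T, 2> a(extents2d);
  mhp::distributed_mdarray<T, 2> b(extents2d);
  mhp::distributed_mdarray<T, 2> c(extents2d);

  // Initial content of a and b -- tutorial readers are encouraged to
  // change these values and re-run the example.
  mhp::iota(a, 0);
  mhp::iota(b, 100);

  // Element-wise c = a + b over the distributed elements.
  auto add = [](auto ops) {
    std::get<2>(ops) = std::get<0>(ops) + std::get<1>(ops);
  };
  mhp::for_each(mhp::views::zip(a, b, c), add);

  // Printing/verification of c is omitted from this sketch.

  mhp::finalize();
  return 0;
}
```

A zip + for_each formulation is one common way to express an element-wise add over distributed ranges; the merged example may of course do this differently.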

@haichangsi (Contributor Author)

> Running on a system with an Intel GPU (DevCloud in particular) results in an error:
>
> uab8f7faf7d5cc8a0d0c8bf0d3553a43@idc-beta-batch-head-node:~/work/tutorial-haichangsi$ I_MPI_OFFLOAD=0 mpirun -n 4 ./build/src/example4
> [1699012212.351400] [idc-beta-batch-head-node:1845409:0]        ib_iface.c:1017 UCX  ERROR ibv_create_cq(cqe=4096) failed: Cannot allocate memory : Please set max locked memory (ulimit -l) to 'unlimited' (current: 4096 kbytes)
> Abort(1615247) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:
> MPIR_Init_thread(176)........:
> MPID_Init(1548)..............:
> MPIDI_OFI_mpi_init_hook(1632):
> create_vni_context(2208).....: OFI endpoint open failed (ofi_init.c:2208:create_vni_context:Input/output error)
>
> (Tested with a locally installed Intel MPI 2021.11.) Please verify on a different system/config. Maybe setting some I_MPI_* environment variables will help.

As we discussed, I think it's caused by your specific MPI configuration.

@haichangsi merged commit 2a0c52d into oneapi-src:main on Nov 13, 2023
2 checks passed
Labels
enhancement New feature or request
Development

Successfully merging this pull request may close these issues.

add a tutorial example - adding 2 mdarrays
2 participants