
Logging Add Electrostatics - 2 #74

Closed
xinmeng2020 opened this issue Apr 12, 2021 · 3 comments

xinmeng2020 commented Apr 12, 2021

--------------- continued from #72 ---------------

Add the force and energy contributions from _q

  • Add the energy contribution:

        config.initial_energy = field_energy + kinetic_energy + bond_energy + angle_energy + field_q_energy

  • Add the force contribution in the store_data function call:

        store_data(
            ...
            field_forces + bond_forces + angle_forces + elec_forces,  ## <-------
            ...
            field_energy,
            field_q_energy,
            ...
        )
    • An error occurs (a minimal reproduction is sketched after this list):

      field_forces + bond_forces + angle_forces + elec_forces, ## <------
      ValueError: operands could not be broadcast together with shapes (10137,3) (10169,3)

      Printing out the shapes (the total number of particles is 20336):

      field_forces.shape (10137, 3)
      bond_forces.shape (10137, 3)
      elec_forces.shape (10169, 3)
      field_forces.shape (10199, 3)
      bond_forces.shape (10199, 3)
      elec_forces.shape (10167, 3)

    • I printed out the shapes before the domain_decomposition():

      field_forces.shape (10169, 3)
      bond_forces.shape (10169, 3)
      elec_forces.shape (10169, 3)
      field_forces.shape (10167, 3)
      bond_forces.shape (10167, 3)
      elec_forces.shape (10167, 3)

    • It seems that the elec_forces are not decomposed as expected. Hi Morten, @mortele, any quick tips? (I am wondering whether this is because of the layouts used in the calculations.)
          layouts = [pm.decompose(positions[types == t]) for t in range(config.n_types)]
          update_field(... layouts ...)
          compute_field_and_kinetic_energy(... layouts ...)
          compute_field_force(... layouts ...)

          if charges_flag:
              layout_q = pm.decompose(positions)  # <------------
              update_field_force_energy_q(... layout_q ...)
    • Thanks to Morten's checking: the error was due to mistakenly setting charges_flag to False, so the elec_forces were never sent into domain_decomposition(). One should indeed be careful about the declaration of variables and the dd operation.
    • To avoid the trouble, charges_flag is now set to True.
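
The following is a minimal, self-contained sketch (not HyMD code) that reproduces the broadcast error and shows a cheap shape guard one could run before summing the per-particle force arrays. The shapes follow the log above; angle_forces is assumed to match the redistributed arrays:

    import numpy as np

    # One rank's arrays: elec_forces kept its pre-dd length because
    # charges_flag was False, while the other arrays were redistributed.
    field_forces = np.zeros((10137, 3))
    bond_forces = np.zeros((10137, 3))
    angle_forces = np.zeros((10137, 3))
    elec_forces = np.zeros((10169, 3))  # stale: skipped domain_decomposition

    # A guard that makes the failure explicit before the sum:
    arrays = {
        "field_forces": field_forces,
        "bond_forces": bond_forces,
        "angle_forces": angle_forces,
        "elec_forces": elec_forces,
    }
    shapes = {name: a.shape for name, a in arrays.items()}
    if len(set(shapes.values())) != 1:
        print("per-particle arrays diverged after dd:", shapes)

    try:
        total = field_forces + bond_forces + angle_forces + elec_forces
    except ValueError as err:
        print(err)  # operands could not be broadcast together with shapes ...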

dd inside the MD loop

  • Duplicating the dd lines from before the MD loop does not work; see the code below and the error info.
        if step != 0 and config.domain_decomposition:
            if np.mod(step, config.domain_decomposition) == 0:
                positions = np.ascontiguousarray(positions)
                bond_forces = np.ascontiguousarray(bond_forces)
                angle_forces = np.ascontiguousarray(angle_forces)

                dd = domain_decomposition(
                    positions,
                    pm,
                    *args_in,
                    molecules=molecules if molecules_flag else None,
                    bonds=bonds if molecules_flag else None,
                    verbose=args.verbose,
                    comm=comm,
                )

Error information:
> Traceback (most recent call last):
>  File "../../hymd/main.py", line 910, in <module>
>    dd = domain_decomposition(
>  File "/Users/lixinmeng/Desktop/working/md-scf/HyMD-2021/hymd/field.py", line 275, in domain_decomposition
>    return layout.exchange(positions, *args)
>  File "/Users/lixinmeng/miniconda3/envs/py38/lib/python3.8/site-packages/pmesh/domain.py", line 162, in exchange
>    data = pack_arrays(args)
>  File "/Users/lixinmeng/miniconda3/envs/py38/lib/python3.8/site-packages/pmesh/domain.py", line 73, in pack_arrays
>    raise ValueError('the shape of the data does not match across different columns.')
> ValueError: the shape of the data does not match across different columns.
  • And we should not do it! A predefined tuple args_in only contains references to the values the variables held when the tuple was built; later rebinding of a variable does not take effect in the tuple. Hi @mortele, this is what I mentioned to you before. To explain, see the arg_in example below:
def ff(a, b, c):
    return a*b*c, a+b+c

a = 10
b = 100
c = -1
arg_in = (a, b, c)
print(ff(*arg_in))  ## (-1000, 109)
c = 1
print(ff(*arg_in))  ## (-1000, 109); the update of c fails
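
A side note (my addition, not from the original discussion): the tuple holds references to objects, so mutating an array in place is visible through arg_in; only rebinding the name is not. Since domain_decomposition() returns new arrays of different per-rank lengths, in-place updates are not an option here, which is why args_in has to be rebuilt:

    import numpy as np

    a = np.array([10.0])
    arg_in = (a,)
    a[0] = 20.0           # in-place mutation: visible through the tuple
    print(arg_in[0])      # [20.]
    a = np.array([30.0])  # rebinding the name: the tuple keeps the old array
    print(arg_in[0])      # still [20.]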
  • To solve this directly, I construct args_in again before calling the domain_decomposition() function:
    • it may be possible to use the exec trick again for the domain_decomposition() function, to make args_in behave as pass-by-reference
                args_in = [
                    velocities,
                    indices,
                    bond_forces,
                    angle_forces,
                    field_forces,
                    names,
                    types,
                ]
                if charges_flag:  ## add charge-related arrays
                    args_in.append(charges)
                    args_in.append(elec_forces)

                dd = domain_decomposition(
                    positions,
                    pm,
                    *args_in,
                    molecules=molecules if molecules_flag else None,
                    bonds=bonds if molecules_flag else None,
                    verbose=args.verbose,
                    comm=comm,
                )
                exec(_cmd_receive_dd)
  • A remaining puzzle is why using the old args_in (defined outside the MD loop) gives the error:

ValueError('the shape of the data does not match across different columns.')
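
A plausible explanation (my guess, not verified against the HyMD sources): the old args_in captured the arrays as they existed before the MD loop. After the first in-loop dd, positions is rebound to a freshly redistributed array with a new per-rank length, while the stale tuple still points at the old force and velocity arrays, so pack_arrays in pmesh sees columns of unequal length. A minimal sketch of the mechanism:

    import numpy as np

    positions = np.zeros((10169, 3))
    bond_forces = np.zeros((10169, 3))
    args_in = (bond_forces,)             # captures the 10169-row array

    # A dd step rebinds both names to new, shorter per-rank arrays ...
    positions = np.zeros((10167, 3))
    bond_forces = np.zeros((10167, 3))

    # ... but the stale tuple still carries the old bond_forces:
    (stale,) = args_in
    print(positions.shape, stale.shape)  # (10167, 3) (10169, 3): mismatched columns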

charges I/O in h5 files

  • In the output file, the charges are added without any check:

        store_static(
            ...
            charges,  ## <--------- simply add charges
            ...
        )
  • Inside store_static:

        h5md.particles_group = h5md.file.create_group("/particles")
        h5md.all_particles = h5md.particles_group.create_group("all")
        mass = h5md.all_particles.create_dataset("mass", (config.n_particles,), dtype)
        mass[:].fill(config.mass)
        ### add charge <----------
        charge = h5md.all_particles.create_dataset("charge", (config.n_particles,), dtype="float32")
        charge[indices] = charges
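
For completeness, a small readback sketch (my addition; the file name is hypothetical) to verify the dataset path that store_static creates, /particles/all/charge:

    import h5py

    # Read the charge dataset back and sanity-check its shape and dtype.
    with h5py.File("out.h5", "r") as f:
        charge = f["/particles/all/charge"][:]
        print(charge.shape, charge.dtype)  # expect (n_particles,) float32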

mortele commented Apr 14, 2021

Can you double-check that the indices array is correctly distributed among the MPI ranks? There might be a bug in distribute_input(...) or domain_decomposition(...) that causes some particles to be represented on multiple MPI ranks, or some particles to be missing and not handled by any rank.

Something like (didn't test this, you might have to iron out some typos or issues):

    rank_indices = indices
    receive_buffer = comm.gather(rank_indices, root=0)

    gathered_rank_indices = None
    if comm.Get_rank() == 0:
        gathered_rank_indices = np.concatenate(receive_buffer)
    concatenated_indices = comm.bcast(gathered_rank_indices, root=0)
    sorted_concatenated_indices = np.sort(concatenated_indices)

    assert (sorted_concatenated_indices == np.array(list(range(config.n_particles)), dtype=int)).all()
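
The gather-to-root followed by the bcast means every rank ends up holding the full concatenated index set, so the assert runs (and fails) consistently on all ranks rather than only on rank 0.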


xinmeng2020 commented Apr 14, 2021

Hi Morten, thanks. I have checked using the following code, and the indices seem normal:

    rank_indices = indices
    receive_buffer = comm.gather(rank_indices, root=0)
    
    gathered_rank_indices = None
    if comm.Get_rank() == 0:
        print('here----- to check')
        gathered_rank_indices = np.concatenate(receive_buffer)
    concatenated_indices = comm.bcast(gathered_rank_indices, root=0)
    sorted_concatenated_indices = np.sort(concatenated_indices)
    
    np.testing.assert_array_equal(sorted_concatenated_indices, np.arange(config.n_particles, dtype=int))  ## <--- check


mortele commented Jan 7, 2022

This should be all taken care of now.

@mortele mortele closed this as completed Jan 7, 2022