
Batching writer.append steps #78

Open
DriesSmit opened this issue Oct 12, 2021 · 7 comments

@DriesSmit

DriesSmit commented Oct 12, 2021

Hey there. Is it possible to batch the writer.append command (e.g. extend) in Reverb? I am currently doing something like the following:

for exp in exps:
    self._writer.append(exp)

Here I loop over 400 steps with quite large observations, and I am also using Launchpad. This append loop takes about 20% of the episode computation time. Is there something like writer.extend in Reverb, or some other method that performs only one write for a batch of steps, for improved performance? Or should I skip writer.append and build trajectories directly from the concatenated episode steps? Thanks.
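For what it's worth, the "build the trajectory yourself" alternative mentioned above can be sketched with plain NumPy: stack the per-step arrays once so only a single object is handed to whatever batched write path is available. This is only a sketch of the data-shaping step (the step count and the 84x84 observation shape are made up for illustration); it does not use the Reverb API.

```python
import numpy as np

# Hypothetical episode: 400 steps of fairly large observations.
# (The 84x84 shape is invented for this example.)
exps = [np.zeros((84, 84)) for _ in range(400)]

# Instead of calling writer.append(exp) 400 times, stack the whole
# episode into one [T, ...] array up front; a batched write path
# would then only need to be handed this single array.
episode = np.stack(exps)  # shape (400, 84, 84)
```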

@acassirer
Contributor

Hi,

The short answer is no, there is not. The slightly longer answer is that this is on our backlog and is likely to be implemented at some point. I can't give you an exact date though.

A final thing worth mentioning is that the legacy Writer has this functionality in append_sequence, which, depending on the specifics of your use case, might solve your problem.

@DriesSmit
Author

DriesSmit commented Oct 13, 2021

Thanks! Does the legacy writer include most of the functionality of the current writer? And could I possibly help contribute an append_sequence for the latest writer?

@acassirer
Contributor

The legacy writer can basically be viewed as a special case of the new writer: its constraint is that the trajectories you build must span ALL of the columns in all of the steps.

That is:

import numpy as np
import reverb

client = reverb.Client(...)

with client.writer(5) as writer:
  for i in range(100):
    writer.append({'a': i, 'b': np.ones([100, 100]) * i})

    if i + 1 >= 5:
      writer.create_item('my_table', num_timesteps=5, priority=1.0)
      writer.flush()

Results in the same behaviour as:

client = reverb.Client(...)

with client.trajectory_writer(5) as writer:
  for i in range(100):
    writer.append({'a': i, 'b': np.ones([100, 100]) * i})
    
    if i + 1 >= 5:
      trajectory = {'a': writer.history['a'][-5:], 'b': writer.history['b'][-5:]}
      writer.create_item('my_table', priority=1.0, trajectory=trajectory)
      writer.flush()

But you could not replicate this behaviour using the old writer:

client = reverb.Client(...)

with client.trajectory_writer(5) as writer:
  for i in range(100):
    writer.append({'a': i, 'b': np.ones([100, 100]) * i})
    
    if i + 1 >= 5:
      trajectory = {'a': writer.history['a'][-5:], 'one_b': writer.history['b'][-5]}
      writer.create_item('my_table', priority=1.0, trajectory=trajectory)
      writer.flush()

Since it uses the last five values of a but the b from only one of the steps.
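To make the distinction concrete, here is a toy stand-in for writer.history built from plain Python lists and NumPy. It mimics only the shapes involved, not the Reverb API: the legacy-style item is a full rectangle over the last five steps, while the second item is a ragged selection the legacy writer cannot express.

```python
import numpy as np

# Toy stand-in for writer.history after 100 appended steps:
# one list per column, one entry per step. NOT the Reverb API.
history = {
    'a': list(range(100)),
    'b': [np.ones((100, 100)) * i for i in range(100)],
}

# Legacy-writer-style item: every column over the last 5 steps
# (a rectangle, expressible via num_timesteps=5).
rectangular = {'a': history['a'][-5:], 'b': history['b'][-5:]}

# TrajectoryWriter-style item: the last 5 of 'a' but only a single
# step of 'b' -- a ragged selection the legacy writer cannot express.
ragged = {'a': history['a'][-5:], 'one_b': history['b'][-5]}
```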

@DriesSmit
Author

DriesSmit commented Oct 14, 2021

Thanks for the code snippets! The legacy writer works great and solved my issue. I actually did not have to do the batching at all, as the legacy writer seems to be much faster than the trajectory writer in my specific case. Does Reverb plan to keep supporting both writers, given that one seems more flexible and the other faster?

@acassirer
Copy link
Contributor

That's great.

No, no further development will go into the legacy Writer. If there is a performance difference then we'll focus on improving the TrajectoryWriter instead.

I find it surprising that there is a large difference in performance, since they share most of the implementation; the legacy writer simply wraps the new writer in order not to break existing use cases. If you are able to provide a minimal repro that shows the difference in performance, we could try to address it and improve the new writer.

@fastturtle
Collaborator

fastturtle commented Oct 25, 2021

@DriesSmit what kind of throughput/qps are you getting for inserting? We did see a drop in speed when we switched Acme's adders over to the TrajectoryWriter, but we have since spent time optimizing both the adders and the TrajectoryWriter, and their performance is now on par for us. I would recommend taking a look at them. A couple of important findings:

  • TrajectoryWriter.history isn't free, cache these calls and make them as infrequently as possible.
  • Make as few calls to TrajectoryWriter.append as possible. We make use of partial_step=True in an extra append call in the NStepTransitionAdder, which simplifies our code but at a performance cost.
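The first bullet can be illustrated without a running Reverb server by counting lookups on a stand-in object. CountingHistory below is hypothetical; it only mirrors the access pattern of TrajectoryWriter.history, where each attribute access does real work.

```python
# Hypothetical stand-in for TrajectoryWriter.history, which the thread
# notes "isn't free": each history[...] access is counted as a cost.
class CountingHistory:
    def __init__(self):
        self.lookups = 0
        self._data = {'a': list(range(100))}

    def __getitem__(self, key):
        self.lookups += 1  # each history[...] access has a cost
        return self._data[key]

# Naive pattern: one history lookup on every loop iteration.
naive = CountingHistory()
for _ in range(10):
    _ = naive['a'][-5:]

# Cached pattern: fetch the column reference once, reuse it.
cached = CountingHistory()
column_a = cached['a']
for _ in range(10):
    _ = column_a[-5:]

print(naive.lookups, cached.lookups)  # 10 vs 1
```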

@AsadJeewa

AsadJeewa commented Feb 28, 2022

I have been having the same issue. We are working on Mava (https://github.com/instadeepai/Mava), which uses Reverb for adders (similar to Acme). The trajectory writer seemed to be slower than the legacy writer, so I did some benchmarking with dummy data to compare the writers directly, isolated from Mava, by extending the Reverb tutorial code (https://gist.github.com/AsadJeewa/18ed6df8875a33c2cb7893738211a82d). It clearly shows that regardless of the number of episodes, sequence length, or other parameters, the trajectory writer is slower.

I have attached an example. The first graph shows cumulative time for write() calls.
[Graph: ep200_step1mil_seq2000_period100__cumtime]

The second shows total execution time.
[Graph: ep200_step1mil_seq2000_period100__tottime]

I can share more results if required.
