
refine murmurhash3_x64_128 for bloom_filter #20996

Merged 1 commit into PaddlePaddle:develop from pyramid_hash_speedup on Nov 5, 2019

Conversation

@luotao1 (Contributor) commented on Nov 4, 2019

1. Refine murmurhash3_x64_128 based on the discussion in Optimize the performance of PyramidDNN on CPU benchmark#151 (comment); a hedged sketch of how such a 128-bit hash typically drives the bloom filter probes follows this list.

Results of pyramid_dnn training on an E5-2620 v3:

before this PR: 185.18 s/epoch
after this PR:  182.4 s/epoch
speedup:        1.5%
2. Do some code cleanup.
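
Below is a minimal, self-contained sketch of the general pattern behind the title, "murmurhash3_x64_128 for bloom_filter": the 128-bit hash yields two 64-bit halves, and the bloom filter derives its k probe bits from them by Kirsch-Mitzenmacher double hashing, so each lookup costs one full hash regardless of k. This is not Paddle's implementation; BloomFilter, Hash128, Insert, and MightContain are hypothetical names, and Hash128 is only a stand-in mixer, not the actual murmurhash3_x64_128 algorithm.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

// Stand-in 128-bit hash for illustration only; in Paddle this role is played
// by murmurhash3_x64_128, which likewise fills two 64-bit output words.
void Hash128(const void* key, int len, uint32_t seed, uint64_t out[2]) {
  const uint8_t* p = static_cast<const uint8_t*>(key);
  uint64_t h1 = seed, h2 = ~static_cast<uint64_t>(seed);
  for (int i = 0; i < len; ++i) {
    h1 = (h1 ^ p[i]) * 0x100000001b3ULL;           // FNV-1a style mix
    h2 = (h2 + p[i] + 1) * 0x9e3779b97f4a7c15ULL;  // golden-ratio mix
  }
  out[0] = h1;
  out[1] = h2 | 1;  // odd stride keeps the double-hash probes distinct
}

// Hypothetical bloom filter: probe i lands on bit (h1 + i * h2) mod num_bits,
// so the key is hashed exactly once per Insert/MightContain call.
struct BloomFilter {
  std::vector<uint64_t> bits;
  uint64_t num_bits;
  int num_hashes;

  BloomFilter(uint64_t nbits, int k)
      : bits((nbits + 63) / 64, 0), num_bits(nbits), num_hashes(k) {}

  void Insert(const void* key, int len) {
    uint64_t h[2];
    Hash128(key, len, /*seed=*/0, h);
    for (int i = 0; i < num_hashes; ++i) {
      uint64_t bit = (h[0] + static_cast<uint64_t>(i) * h[1]) % num_bits;
      bits[bit >> 6] |= 1ULL << (bit & 63);
    }
  }

  bool MightContain(const void* key, int len) const {
    uint64_t h[2];
    Hash128(key, len, /*seed=*/0, h);
    for (int i = 0; i < num_hashes; ++i) {
      uint64_t bit = (h[0] + static_cast<uint64_t>(i) * h[1]) % num_bits;
      if (!(bits[bit >> 6] & (1ULL << (bit & 63)))) return false;  // definitely absent
    }
    return true;  // possibly present (false positives are allowed)
  }
};

int main() {
  BloomFilter bf(/*nbits=*/1 << 20, /*k=*/4);
  const char* word = "pyramid_hash";
  bf.Insert(word, static_cast<int>(std::strlen(word)));
  std::printf("inserted key: %d\n",
              bf.MightContain(word, static_cast<int>(std::strlen(word))));
  std::printf("other key:    %d\n", bf.MightContain("absent", 6));
  return 0;
}
```

With this double-hashing layout the hash is computed once per key, so any speedup of the underlying 128-bit hash (as this PR pursues for murmurhash3_x64_128) feeds directly into bloom-filter lookup time.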

@Aurelius84 (Contributor) left a comment

LGTM

@luotao1 luotao1 merged commit 25ffa84 into PaddlePaddle:develop Nov 5, 2019
@luotao1 luotao1 deleted the pyramid_hash_speedup branch November 5, 2019 02:03
seiriosPlus pushed a commit to seiriosPlus/Paddle that referenced this pull request on Dec 9, 2019