
Fix benchmarks for Julia 1.0 / 0.7 #111

Open
hustf opened this issue Sep 2, 2018 · 8 comments

hustf commented Sep 2, 2018

  1. The benchmarks depend on the packages UnicodePlots and IndexedTables. IndexedTables is still not working on Julia 0.7, but the functionality can be provided by the Millboard package instead.

  2. The benchmarks also use logutils from the logging folder in this package. It probably does not work due to the new broadcast behaviour. Consider using another logging package.

  3. 'Open browsers' also needs some fixing for Julia 1.0. Consider including e.g. Blink.

  4. Optionally, expand the benchmarks with tests. Seconds per byte is probably not the most interesting benchmark; message latency or messages per second for various browsers / clients is more interesting. It would also be interesting to compare the performance of various methods of running / compiling / calling JavaScript, and to show how to avoid memory allocation and hence garbage collection. The latter probably belongs in the examples folder. A rough sketch of a latency measurement follows below.
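
A minimal sketch of such a message-latency measurement for point 4, assuming an echo server (hypothetical, e.g. written with WebSockets.serve) is already listening on port 8080; the port and payload are assumptions for illustration:

using WebSockets, BenchmarkTools

# Hypothetical sketch: seconds per round trip instead of seconds per byte.
# Assumes an echo server is already listening on ws://localhost:8080.
function roundtrips(url, n)
    WebSockets.open(url) do ws
        for _ in 1:n
            writeguarded(ws, "ping") || break
            data, ok = readguarded(ws)
            ok || break
        end
    end
end

t = @belapsed roundtrips("ws://localhost:8080", 1000)
println("Average round-trip latency: ", t / 1000, " s")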


hustf commented Sep 4, 2018

A couple of commits have been made to the 'issue111' branch. Work in progress.

  1. Done.
  2. Partly working with deprecation warnings on Julia 0.7.
  3. Blink not implemented yet.
  4. Not started.


hustf commented Dec 6, 2018

More changes have been made to Logging.jl since Julia 0.7 than I was aware of,
so the logging tests are failing, and this was unfinished code anyway.

The earlier benchmarks made use of logutils_ws.jl, so it seems a tidier approach to

  1. define Base.show and Base.print methods for the package's own types WebSocket and ServerWS. These can be included in WebSockets.jl.
  2. add a WebSocketLogger which uses IOContext instead of dispatching on its own internal IO types. In that context, non-package types like Request and Function can be shown in a format more suitable for debug logging of websocket applications (a rough sketch of points 1 and 2 follows after this list).
  3. fix the benchmarks when logging is properly in place. The unfinished code stays in branch 'issue111'.
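
A minimal sketch of points 1 and 2; the :wslog IOContext key and the output format are assumptions for illustration, not the API that ended up in the package:

using WebSockets

# Point 1: a compact text representation for the package's own types
# (shown here outside the package for illustration; it would live in WebSockets.jl).
# Point 2: the logger passes a display hint through IOContext instead of
# dispatching on its own internal IO types. The :wslog key is hypothetical.
function Base.show(io::IO, ws::WebSocket)
    if get(io, :wslog, false)
        print(io, "WebSocket(", ws.server ? "server" : "client", ")")
    else
        print(io, "WebSocket(server = ", ws.server, ")")
    end
end

# A logger would then wrap its stream before printing a log record, e.g.
# show(IOContext(stderr, :wslog => true), ws)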


hustf commented Feb 20, 2019

As per the last post,

  1. Implemented and included in version 1.2.0.
  2. Implemented in PR Integrate WebSocketLogger #131 (not quite merged yet), the basis for version 1.3.0.
  3. To be done, but since WebSocketLogger includes some flexibility that may be hard to compile efficiently, an ad-hoc benchmark is called for before merging.


hustf commented Feb 20, 2019

A full ad-hoc benchmark script. The results show no deterioration:

Tested with Julia v1.0.3, cygwin-compiled with 8 cores, on a newer laptop. Before running, check out the PR and change to the examples folder. Note the version switch (exit, git checkout master, restart julia) in the middle of the script.

using BenchmarkTools
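# Run the adapted example once via include (its server closes itself after the
# 5 s timeout), then start a fresh server and time the client call with @belapsed.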

include("count_with_logger.jl")
while !istaskdone(sertask)
	yield()
end
const se, ta = serve_task()
sleep(1)
elapsed_s1 = @belapsed with_logger(WebSocketLogger(stderr, WebSockets.Logging.Debug)) do
        WebSockets.open(coroutine_count, "ws://localhost:$PORT", subprotocol = "count")
    end

put!(se.in, "Close")
while !istaskdone(ta)
	yield()
end
println("***")
println("With pull request #131: ", elapsed_s1, " s")    # -> 0.21819 s

exit()
# In the shell: check out master (WebSockets version 1.2.0) and restart Julia
git checkout master
julia

#===

/examples/count_with_logger.jl adapted for WebSockets version 1.2.0

===#
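
# The adapted example defines an HTTP handler and a counting websocket coroutine,
# starts a server task, and opens a client that counts to COUNTTO over the
# "count" subprotocol; a timer closes the server after 5 s.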

using WebSockets
import WebSockets.Logging
using Logging
const COUNTTO = 10
const PORT = 8090
addsubproto("count")

httphandler(req::WebSockets.Request) = WebSockets.Response(200, "OK")

function coroutine_count(ws)
    @debug ("Enter coroutine, ", ws)
    success = true
    protocolfollowed = true
    counter = 0
    # Server sends 1
    if ws.server
        counter += 1
        if !writeguarded(ws, Array{UInt8}([counter]))
            @warn ws, " could not write first message"
            protocolfollowed = false
        end
    end


    while isopen(ws) && protocolfollowed && counter < COUNTTO && success
        protocolfollowed = false
        data, success = readguarded(ws)
        if success
            OUTDATA = data
            if length(data) == 1
                if data[1] == counter + 1
                    protocolfollowed = true
                    counter += 2
                    @debug ws, " Counter: ", counter
                    writeguarded(ws, Array{UInt8}([counter]))
                else
                    @error ws, " unexpected: ", counter
                end
            else
                protocolfollowed = false
                @warn (ws, " wrong message length: "), length(data)
                @debug "Data is type ", typeof(data)
                @debug data
            end
        end
    end
    if protocolfollowed
        @debug (ws, " finished counting to $COUNTTO, exiting")
    else
        @debug "Exiting ", ws, " Counter: ", counter
    end
end

function gatekeeper(req, ws)
    @debug req
    if subprotocol(req) != ""
        coroutine_count(ws)
    else
        @debug "No subprotocol"
    end
end

function serve_task(logger= ConsoleLogger(stderr, Base.CoreLogging.Debug))
    server = WebSockets.ServerWS(httphandler, gatekeeper)
    task = @async   with_logger(logger) do
        WebSockets.serve(server, port = PORT)
    end
    @info "http://localhost:$PORT"
    return server, task
end

server, sertask = serve_task()

clitask = @async with_logger(ConsoleLogger(stderr, WebSockets.Logging.Debug)) do
        WebSockets.open(coroutine_count, "ws://localhost:$PORT", subprotocol = "count")
    end
	
@async begin
    sleep(5)
    println("Time out 5 s, closing server")
    put!(server.in, "Close")
    nothing
end

#===
End of the adapted example; now follows a time trial similar to the one at the beginning.
===#
using BenchmarkTools
while !istaskdone(sertask)
	yield()
end
const se, ta = serve_task()
sleep(1)
elapsed_s2 = @belapsed with_logger(ConsoleLogger(stderr, Base.CoreLogging.Debug)) do
        WebSockets.open(coroutine_count, "ws://localhost:$PORT", subprotocol = "count")
    end

put!(se.in, "Close")
while !istaskdone(ta)
	yield()
end
println("***")
println("With WebSockets version 1.2.0: ", elapsed_s2, " s")    # ->  0.241720844 s


hustf commented Mar 21, 2019

Checking the above script with WebSockets version 1.5.0 and the downloaded Julia 1.1 binary. The binary is perhaps slower, but I have had difficulties compiling the latest Julia 1.1 myself.

using BenchmarkTools

include("count_with_logger.jl")
while !istaskdone(sertask)
	yield()
end
const se, ta = serve_task()
sleep(1)
elapsed_s1 = @belapsed with_logger(WebSocketLogger(stderr, WebSockets.Logging.Debug)) do
        WebSockets.open(coroutine_count, "ws://localhost:$PORT", subprotocol = "count")
    end

put!(se.in, "Close")
while !istaskdone(ta)
	yield()
end
println("***")
println("With v1.5.0: ", elapsed_s1, " s")  

When running the windows binary from a cygwin mintty v2.9.6 terminal: 0.26 s.
When running the binary from a cygwin mintty v2.9.9 terminal: 0.23 s.
When running the binary by double-clicking: 0.010168884 s
When running in Atom / Juno terminal: 0.010659732 s
When running in VS Code terminal: 0.009946798 s
When running in PowerShell terminal: 0.010115641 s

Apparently, there's an issue with mintty, GPUs and latency on Windows 10. This masks the small differences between versions of WebSockets.


Moelf commented Aug 18, 2021

https://stackoverflow.com/questions/68827624/julia-websocket-slow-to-read-write

Relevant performance issue; we're 2x slower.


hustf commented Aug 18, 2021 via email


Moelf commented Aug 18, 2021

> by instinct this is about compile time.

Why? It runs in a loop to test the time of write and read, many times; I don't think there's any compiler latency issue involved.

And it's not multi-threaded / multi-process; it's not even really async, because in the Python code every async call is immediately awaited.
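
For reference, a minimal sketch (not the Stack Overflow code; the URL, payload size and message count are assumptions) of the kind of tight write/read loop being discussed, measured against an already-running echo server:

using WebSockets

const N = 10_000
const MSG = rand(UInt8, 100)                     # assumed 100-byte payload

WebSockets.open("ws://127.0.0.1:8080") do ws     # assumed local echo server
    t0 = time()
    for _ in 1:N
        writeguarded(ws, MSG)
        readguarded(ws)
    end
    println(round(N / (time() - t0), digits = 1), " messages per second")
end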
