large file upload for OTA #126
Configure the HTTP server chunk size to a suitable size, then write each chunk in the response handler. The linked example doesn't handle this situation, but all you do is check. Please note that #42 needs to be implemented if you want to use the MIMEParser. I'm not sure the suggested solution in that issue is the best one; I imagine it is possible to handle any-sized data also when MIME-decoding.
Thank you for your fast answer.
Heh, that's what I get for answering too quickly... Yes, blob sends data to the browser. They're both instances of the same method though, just for different use-cases.
After taking a closer look at the MIME parser, I think it should be possible to implement the handling of smaller data chunks. I could imagine that passing an additional parameter (e.g. max_chunk_size) to the parser could do the job.
I'd prefer not to have a fixed limit if there is a way to avoid it. Having the callback handle chunks isn't different from what other parts do, so I see no reason to avoid such a design.
In order to keep the existing API of Smooth, my suggestion would be to define another overloaded version of
I'm not against breaking the API if it means an improvement in the ability to handle incoming data. Your solution will probably work, but the data passed to the callback should probably be decoded already; otherwise HTTP-stuff leaks into the application layer. I'm not sure this is doable without first reading the entire data. Perhaps caching data to disk is an option to reduce memory usage while receiving? There still has to be some max size anyhow, but it can be much larger than if it is kept in memory only. Not sure what you're asking re
Having another overloaded version of
Well... the MIME parser detects which mode it needs to operate in; it doesn't necessarily have to be handled by the same method. Re. overload or not: once the framework supports chunked data via the MIME parser, there's really no use for the current implementation. And, considering that the current implementation is vulnerable to large data sizes, it really should be replaced.
One more question: what exactly is the meaning of the variable
The headers are always fully read before the callback is called; hence the fixed max header size during setup of the HTTP server. If
Yesterday I worked on the MIMEParser. I now have a first draft working, but there is still one problem I have to fix: the resulting file has an additional CRLF at the end - I think that should not be too difficult to fix, though. The upload speed I achieve is 63 kBytes/s with a direct connection between the ESP32 and my PC - not really fast, but probably OK for my purpose.
Am I missing anything? Do you want to check the changes I made in advance?
Binary and non-binary files. 64 kBytes/s is really slow. Have you determined where the choke point is?
I want to try to implement support for multi-file upload first. Then I will do the testing according to what you suggested. But it will take some time, because I am working on this in my free time - which is quite rare nowadays. (And additionally, I am quite slow at coding...)
Take your time, I'm happy to help if/when needed. 256 bytes? Yeah, that might be causing slowdowns.
Now I am able to upload multiple files in one step. Upload speed is optimized and is now approx. 150 kBytes/s. The improvement in upload speed is due to an optimization in searching for boundaries in the incoming data; this shows me once more that slow PSRAM is very likely the major issue.
That's a good improvement in speed!
Sounds like a job for a finite state machine.
Now I have implemented a state machine, and there is only one (none) return statement in
probably due to the watchdog not being serviced. Do you have an idea how I can circumvent this?
Using emplace_back() for each single byte seems to be a quite expensive approach. Isn't there a cheaper way to move uint8_t[] data into a std::vector<uint8_t>, e.g. something like memcpy()? I didn't find any. Do you have an idea?
Unless you have tasks that run on the idle-tick, the watchdog issues are nothing to worry about; at least that is what I've been told, and I've never seen any issues regarding this. The only way to prevent them is for other tasks to either kick the WD or to yield processing time. Try
FYI: a vector is guaranteed to be contiguous memory, so it can be used as an array, but you'll have to call reserve() at the appropriate time.
Now I have tested the code under Windows and Linux, with text and binary files, using Firefox, Chrome and Edge. In all cases the uploads completed without error.
Do 1000 files work under Linux? 10000? A device crashing is never a good thing - what does it print to the console? 0 bytes with std::copy? Sounds like you had the wrong arguments; I doubt it is broken... paste/link the code snippet? Feel free to do the PR whenever, and I'll have a look and do some testing too.
Some update:
I need to analyze this a bit deeper. I will also check whether the memory leak was already present in the previous version of the
A memory leak should be easy to find if you run it under Linux with Valgrind; it'll even tell you where the leak is. If you do a PR I can have a look too, when time permits.
I have now finished porting the modifications for the MIMEParser into the latest version of Smooth, and I also adapted the http_server_test. Basically it works, but unfortunately I encounter the problem that after the upload, the final response only appears in the browser when I upload very small files (<4 kB). This is due to the fact that
OK, let me know if you need something from me.
I think this can be closed now.
Hi,
currently I am trying to do a large file upload like in the http_server_test example. With this file I would like to do an ESP32 firmware update using the OTA functionality, so the file to upload has a size of approx. 2.3 MB. Unfortunately, the file seems to be loaded as a whole and not in chunks, so my callback function is called only once for the entire file. For small files this works, but for larger files (approx. >1 MB) the ESP32 crashes, probably due to a memory limit.
What I would need is functionality which reads chunks from the data stream and passes these chunks to my callback function, in order to write to the partition which is to be flashed.
Do you have a hint how I can accomplish that?
Kind regards