MRE using EonTI Certs #37
@shankari The EvseSecurity module has several parameters that point to the different certificate bundles. Each of these parameters has a default path, which is where I am trying to copy the certs that Tara has just provided. Here are my questions: 1.) The majority of the files that Tara provided are 2.) I might be getting mixed up with where these certs are supposed to be copied. We have the contract certs and the EVSE certs, but I'm not sure which bundle parameters are the correct ones. The
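As an aside on figuring out which bundle is which: one way to check what a given bundle file actually contains is to list the subject of every certificate in it. The sketch below is illustrative only — it builds a throwaway bundle from two self-signed certs (the CNs and file names are invented), then prints the subjects; substitute the real EvseSecurity bundle paths when inspecting the demo.

```shell
tmp=$(mktemp -d); cd "$tmp"
# Generate two throwaway self-signed certs (placeholder CNs, not real names)
for cn in V2G_ROOT MO_ROOT; do
  openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout "$cn.key" -out "$cn.pem" -subj "/CN=$cn" 2>/dev/null
done
# Concatenate them into a bundle, the way the EvseSecurity bundles are laid out
cat V2G_ROOT.pem MO_ROOT.pem > bundle.pem
# Print the subject line of every certificate in the bundle
subjects=$(openssl crl2pkcs7 -nocrl -certfile bundle.pem \
  | openssl pkcs7 -print_certs -noout | grep subject)
echo "$subjects"
```

Running this against each candidate bundle path makes it quick to see which bundle a given root actually landed in.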
I've ruled out the
Although not directed to me, I have some of the answers.
Ok, I've copied both root certs to
I wanted to check if I perhaps missed a step, such as adding additional certs to MaEVe. I still haven't replaced any of the other certificates found in
Sorry for the late response. What did you do with the leaf certs? EonTi didn't provide a leaf cert for the CSMS (MaEVe in this case). So, if you don't have that, OCPP over TLS with EonTi certs will not work. Actually, I was originally doing what you are doing now. I am also waiting for the CSMS cert.
As we were successful with the locally generated certificates, we are now going to test OCPP 2.0.1 with MaEVe using the certificates that EonTi provided. I wanted to open a new issue and @shankari agreed to this here. But as this issue was created for the same purpose, let's go with this one. The solution was supposed to be straightforward: we just needed to copy the EonTi certificates over the locally generated ones in the proper places. When I did that, I got the following error while establishing TLS between the OCPP server (MaEVe) and client (EVerest):
MaEVe gateway logs reported this:
Using wireshark, I got this:
When I checked if the certificate chain is valid, I got a positive result.
For CSMS client chain:
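For reference, the chain check described above can be reproduced against a throwaway PKI. This is a sketch, not the exact commands used: the CNs and file names are placeholders standing in for the real EonTi ones, but the `openssl verify` step is the same shape as validating the CSMS chains.

```shell
tmp=$(mktemp -d); cd "$tmp"
# 1. A self-signed root (placeholder CN)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout root.key -out root.pem -subj "/CN=Test Root CA" 2>/dev/null
# 2. A leaf CSR, signed by that root
openssl req -newkey rsa:2048 -nodes \
  -keyout leaf.key -out leaf.csr -subj "/CN=csms.example" 2>/dev/null
openssl x509 -req -in leaf.csr -CA root.pem -CAkey root.key \
  -CAcreateserial -out leaf.pem -days 1 2>/dev/null
# 3. Verify the leaf against the root bundle
result=$(openssl verify -CAfile root.pem leaf.pem)
echo "$result"
```

A positive result here, as in the thread, only proves the chain is internally consistent — it says nothing about whether the peer accepts the names in the certs.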
My suspicion was that the CN of the CSMS server cert not being
As @shankari suggested, I tested whether changing the CN in the CSMS server certificate creates the same error. So, I took the currently working cert bundle and just replaced the CSMS server cert-key pair where the CN was Similarly, the certificate chains were validated but ultimately caused the same error:
For CSMS server chain:
For CSMS client chain:
Here we are using the same chain for both the server and the client, but this is not an issue; we do the same thing in the working scenario. Ideally, they should be two different chains, like the ones provided by EonTi.
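One detail worth keeping in mind when testing CN changes like this: chain validity and hostname matching are separate checks, so a chain can verify cleanly while the TLS client still rejects the server's name. The sketch below illustrates this with invented CNs (and assumes OpenSSL's fallback to the subject CN when no SAN is present); it is not the demo's actual configuration.

```shell
tmp=$(mktemp -d); cd "$tmp"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout ca.key -out ca.pem -subj "/CN=Demo Root" 2>/dev/null
# Server cert whose CN deliberately does not match the hostname we dial
openssl req -newkey rsa:2048 -nodes -keyout s.key -out s.csr \
  -subj "/CN=wrong-name" 2>/dev/null
openssl x509 -req -in s.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
  -out s.pem -days 1 2>/dev/null
# Chain check passes...
chain_ok=$(openssl verify -CAfile ca.pem s.pem)
# ...but a hostname check against "csms" fails
host_ok=$(openssl verify -CAfile ca.pem -verify_hostname csms s.pem 2>&1 || true)
echo "$chain_ok"; echo "$host_ok"
```

So a "positive" `openssl verify` on both chains is consistent with the handshake still failing on the name the client expects.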
@the-bay-kay, I am not sure if you need to use wireshark, but in case you do, there are many instructions on how to use wireshark to view traffic over a docker network (e.g. https://github.com/linuxserver/docker-wireshark or https://superuser.com/questions/1685605/how-to-capture-docker-container-traffic-using-wireshark). If you do end up using wireshark, particularly to decode the EXI messages (#44 (comment)), I would suggest that you change all the docker compose files to include wireshark by default and change the README to include instructions on how to access the wireshark desktop
Hi all! Just wanted to ask some quick clarifying questions about the certificate replacement process, to make sure I'm on the right track reproducing & debugging this. As I understand it, I need to replace the following root certs: (all under
Are there any certs I'm missing that I should replace? (I've seen the CSMS_Leaf mentioned a few times in this thread, for example.) And, to that end -- should I replace the entire chain, or is replacing the root sufficient? Assuming I've got the correct list of certs to replace, where exactly in the container (manager-1) should I find and replace them? My guess is Finally -- where will I be placing the certs in the MaEVe container? I see there's a script to fetch certificates (link), but I haven't been able to find where the config directory is within the docker container. I'll go ahead and start experimenting with my current assumptions -- let me know if there's anything wrong in my understanding of this project!
@the-bay-kay the list of certs and the locations they map to are in the demo script ( As we discussed:
Correct. The "underlying transport error" is what we need to fix. The OCSP error happens in both situations and can be ignored (at least for now).
Interesting. Obviously, I would assume that this should not work. Would be good to understand what is validating the cert/token if the connection to the CSMS is broken. Is the station doing some local validation? That may or may not be incorrect - we do want the station to be robust to network failure, and sometimes charging providers allow free sessions if the network is not available because it is their problem rather than the customer's problem. But if they don't allow free access (which is part of the ISO 15118 spec), then the charging shouldn't work. BTW, tests like this would have been part of the adversarial PKI testing event in April that switched to a virtual format...
Tracing through from this error... log_on_fail seems to be getting called by on_fail_plain, the fail handler for the client connection_pointer set in
Enabled debug logging, then ran the demo and captured the output: cat demo-iso-15118-2-ac-plus-ocpp.sh | bash -s - -r $(pwd) -e 2>&1 | tee temp.log. Searching for the same error (while ignoring the erroneous characters...)
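The capture-then-search step above can be sketched like this; the log lines are fabricated stand-ins for the EVerest output, and `temp.log` is just the capture file name used in the thread.

```shell
tmp=$(mktemp -d); cd "$tmp"
# Stand-in for the demo's combined stdout/stderr, written through tee
printf '%s\n' \
  'DEBG ocpp: opening websocket' \
  'ERRO ocpp: underlying transport error' \
  'DEBG ocpp: scheduling reconnect' \
  | tee temp.log > /dev/null
# Pull the failure line out with a line of context on each side
hits=$(grep -n -B1 -A1 'transport error' temp.log)
echo "$hits"
```

Using `-B`/`-A` context around the error is usually enough to see which handler fired immediately before and after it.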
Zooming into the new debug logs, and cleaning them up...
So, we see a few outputs from getAuthorizationHeader() here. Time to trace back and see what calls this - I'm unsure how helpful this trail will be, but it's a start!
I should've scrolled a bit further down -- I'm not sure the logs above were the relevant bits. Looking at the logs after the failure... Unfiltered Logs
Zooming in a bit on the security calls (and again, cleaning them up) .... EVSE Security Logs
We finally narrow in on...
I can't help but wonder, is this
(An Aside: For those new-ish to Charging Infrastructure Certification, as I was when starting this project -- I found this article particularly helpful when trying to understand the methodology / justification behind Plug-and-Charge network authentication!)
We are not replacing it in the demo script, and the demo script works => it is not used. Feel free to remove it completely, test with the original demo certs, and verify that it works.
Yup, you were right. Deleting
Gives the logs below, and runs as expected (searching for No Leaf Key, Profile 2
Quick update -- going over this bug with a fine-tooth comb, attempting to trace through the code and understand what's happening. I'll be building a call-stack diagram, just for my own understanding (added under the fold). Looking at the following log block surrounding the error... We know that the error occurs after WebsocketTLS::connect is called, as the reconnect callback is getting triggered. I've traced as far as WebsocketTLS::connect_tls. We can assume that either this, or whatever calls the original connect, is encapsulating the error. Let's say it's the latter, for the purpose of sketching our call stack. We'll look for where this is being called later -- for now, let's build our way up from the root of the tree. At the bottom of our tree is Evse::get_verify_file, which eventually calls X509CertificateBundle::get_certficate_hierarchy, which in turn calls X509CertificateHierarchy::build_hierarchy. We can rule out this block as the problem child, as we see none of the error logs thrown during these calls (and, indeed, the x509 verification happens, per the DEBG logs earlier in this thread). I think we can rule out the current EVSE leaf functions -- we would expect logs if any of them were to reach a fail state. Will continue tracing, and update with any findings!
My current hunch is, we can rule out everything under

Edit: indeed, we can start on 281, as the debug logs indicate we get this far...
Comparing the working and broken debug logs... Normal Profile 2
Profile 2 + Eonti
There seem to be no discernible differences we can glean from the logs -- the
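When eyeballing two long captures like this, it can help to normalize away the timestamps before diffing, so only genuinely different lines remain. A sketch of that workflow, with invented sample log lines (the sed pattern assumes a `YYYY-MM-DD HH:MM:SS.mmm` prefix, which may need adjusting for the real EVerest format):

```shell
tmp=$(mktemp -d); cd "$tmp"
cat > working.log <<'EOF'
2024-06-12 10:00:01.123 [INFO] connect_tls: starting handshake
2024-06-12 10:00:01.456 [INFO] connect_tls: handshake complete
EOF
cat > broken.log <<'EOF'
2024-06-12 11:30:07.789 [INFO] connect_tls: starting handshake
2024-06-12 11:30:08.012 [ERRO] connect_tls: underlying transport error
EOF
# Strip the leading date/time so identical events compare equal
normalize() { sed -E 's/^[0-9-]+ [0-9:.]+ //' "$1"; }
diff_out=$(diff <(normalize working.log) <(normalize broken.log) || true)
echo "$diff_out"
```

After normalization, only the lines that actually diverge between the working and broken runs show up in the diff.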
Briefly reading up on Asio and WebSocket++'s implementation of the asio_client, just to better understand what's happening when
Fishing for leads, pivoting to trying Wireshark. I've had some issues with docker-wireshark -- perhaps it's my lack of familiarity with the software, but I haven't been able to monitor any container traffic (see screenshots under fold). Going to spend a bit of time with Edgeshark to weigh my options.

Docker Wireshark screenshots:
After some exploration, I haven't been able to capture any traffic. Am I missing something in the GUI / setup? I've followed the docker-compose setup pretty closely...

Regular Wireshark:
Compare the above to regular Wireshark, running on the same Windows host -- lots of opportunities to capture.

EDIT: Well, that was wonderfully painless! Setting up Edgeshark seems to be as simple as running it in a parallel container. Perhaps that was my issue with the linuxserver docker-wireshark -- should it have been a parallel container, rather than an internal one? Either way, I'll stick with Edgeshark, since it seems to be working (video below).

Edgeshark running: Screen.Recording.2024-06-12.at.1.mp4
Added Edgeshark and execution details in PR #60 -- will keep the discussion of certificate investigation in this thread, as it contains all of the background information. |
So far, I've been unable to capture this alert. I'll absolutely chalk this up to my lack of experience with the software -- just wanted to give some updates on my methods, in case I'm barking up the wrong tree. Currently, I'm capturing the virtual bridges for Maeve-CSMS and Everest-Demo, and filtering for
Taking a step back to carefully go through the certs provided, some general thoughts on the cert replacement: could a mismatch in the MO Root CN be causing issues? Our current MO has a Subject Name CN of Admittedly, I'm somewhat hazy on where in the code this chain would be evaluated. I understand the MaEVe setup -- we replace the other certs with those generated by the makefile, and replace the MO Root in the config, using the patch to update said config. We may not be transferring the other MO certs, but shouldn't we be replacing them? Will spend some time testing now.
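A quick way to test the mismatch hypothesis above: for a chain to build, the leaf's issuer field must byte-match the CA's subject field. The sketch below uses placeholder CNs standing in for the MO Root names; the same two `openssl x509` calls work against the real EonTi files.

```shell
tmp=$(mktemp -d); cd "$tmp"
# Placeholder MO root and a contract-cert leaf signed by it
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout mo.key -out mo_root.pem -subj "/CN=MO_ROOT_A" 2>/dev/null
openssl req -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.csr \
  -subj "/CN=contract-cert" 2>/dev/null
openssl x509 -req -in leaf.csr -CA mo_root.pem -CAkey mo.key \
  -CAcreateserial -out leaf.pem -days 1 2>/dev/null
# These two lines must agree for the chain to build
ca_subject=$(openssl x509 -noout -subject -in mo_root.pem)
leaf_issuer=$(openssl x509 -noout -issuer -in leaf.pem)
echo "$ca_subject"; echo "$leaf_issuer"
```

If the printed subject of the root you installed does not match the issuer printed from the contract cert, that root is not the one that signed it, regardless of what the CN looks like.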
Another point of confusion -- within the given certs, the only CSMS Leaf cert that is viable (e.g., that has a CN of
Opening an issue to discuss the creation of an MRE that uses EonTI certificates instead of the current self-signed certificates.