Catch duplicate API key specified (plus scope creep) #226
base: main
Conversation
Most polite. No "WTAF?".
🤣🤣🤣 |
Long may the changes in the last two PRs hide dormant. Prepped for next anyways. |
We now meet (well, far exceed) the minimum requirements for becoming a core component. Unit testing of config_flow.py is a must. I worked out how to shoehorn pytests for a custom integration into the dev container environment, with minimal alterations required when transitioning to a core component... In "mounts" (from the dev container configuration):

"source=${localEnv:HOME}/Documents/GitHub/ha-solcast-solar/custom_components/solcast_solar,target=${containerWorkspaceFolder}/config/custom_components/solcast_solar,type=bind",
"source=${localEnv:HOME}/Documents/GitHub/ha-solcast-solar/tests,target=${containerWorkspaceFolder}/tests/components/solcast_solar,type=bind",
|
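For reference, a minimal sketch of what a tests/conftest.py could look like under this arrangement, assuming pytest-homeassistant-custom-component is installed in the container (that package, and reliance on its enable_custom_integrations fixture, are assumptions here, not something confirmed in this thread):

```python
"""Hypothetical tests/conftest.py for running tests against the mounted custom component."""
import pytest


@pytest.fixture(autouse=True)
def auto_enable_custom_integrations(enable_custom_integrations):
    """Allow every test to load integrations from custom_components.

    The enable_custom_integrations fixture is provided by
    pytest-homeassistant-custom-component; making it autouse means individual
    tests do not need to request it explicitly.
    """
    yield
```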
Nice
OK, I'll try to have a look over the weekend. Very tired, now, so off to bed, but maybe we throw it at the wall and see if it sticks. Will read up on the process. |
Sounds like a journey, but I'm betting it's a shorter one than HACS at the mo'... We got this. |
I've removed the self-signed certs to stop GitGuardian complaining about them. The sim creates the cert if required on startup. |
The hard limit validation oopsie would have been caught by the unit test had I also checked for an expected positive outcome going wrong, instead of only expecting the worst from bad input. I did not. The unit test failed everyone. It sailed through without a care in the world. It does not now. |
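As an illustration only (the validator and its rules below are hypothetical stand-ins, not the integration's actual code), parametrising both the happy path and the bad input is what catches this class of slip:

```python
import pytest


def validate_hard_limit(value: str) -> bool:
    """Hypothetical stand-in for a hard limit validator: one or more
    comma-separated, non-negative numbers."""
    try:
        return all(float(part) >= 0 for part in value.split(","))
    except ValueError:
        return False


@pytest.mark.parametrize(
    ("value", "expected"),
    [
        ("100", True),      # the expected-positive cases must be asserted too...
        ("6.0,6.0", True),  # ...or a regression in valid input sails straight through
        ("-1", False),
        ("six kW", False),
    ],
)
def test_hard_limit_validation(value: str, expected: bool) -> None:
    assert validate_hard_limit(value) is expected
```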
So I am kind-of guessing re. #227. I am only led by the exception. The latest commit catches the very first exception regarding 'index' and then bluntly refuses to start the integration, with an inference of a corrupt cache. I thought about doing something fancier, but fancier would have required a LOT more input as to the situation, and a LOT more testing. I am not in the mood for testing. And I have no input. So it is what it is for now. |
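Something along these lines is the shape of it (a sketch only: the cache structure and helper name are assumptions, and ConfigEntryError is used here as the way to signal a non-retryable startup failure):

```python
from typing import Any

from homeassistant.exceptions import ConfigEntryError


def cached_site_forecasts(cache: dict[str, Any], resource_id: str) -> list[dict[str, Any]]:
    """Hypothetical helper: refuse to start rather than limp on with broken cached data."""
    try:
        return cache["siteinfo"][resource_id]["forecasts"]
    except (KeyError, IndexError, TypeError) as err:
        # Bail out of setup with an explanation instead of raising a bare 'index' error.
        raise ConfigEntryError(f"Cached data appears corrupt: {err}") from err
```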
I am kind-of guessing that some ancient installs (possibly manual, non-HACS) have been upgraded, and they are the cause of the last few issues like this. |
Kind-of not, I'm tipping. The log indicated a usage cache load, with a count of two for API used, so must be a relatively recent version. Last reset was within twenty-four hours. This might have been from a prior start attempt, but there are two sites, so if anything API used should be four calls, given history load on fresh start. Unless the second site failed to load... I got nothin' but conjecture.
|
Ahh, I did squint at the log on my phone after I carefully picked up my specs case, left home, and discovered that I had left me specs at home, but didn't spot that. |
Further fun with unit tests. Just uncovered a bug: turning off the detailed half-hourly attribute with detailed hourly enabled results in a puff of virtual magic smoke.
|
Thanks for all your amazing efforts, @autoSteve - I would have just been picking up pieces from the floor every time it broke without you. I just checked my logs - no 429s, but I did have a cache reset stale. That said, it could be because I've now got watchtower somewhat aggressively updating images, and it did update homeassistant so that might be the cause - but I only noticed because I just went looking for 429s:
Enjoy your festivus however you choose to. I'm just finishing up writing a PIR because there is no rest for the wicked. |
Also remove the now redundant test_solcastapi.py
Caches during testing get created at a test-specific location. I am still on the hunt for code that is not exercised. The current set of tests hit heaps of code paths (excepting Solcast API failure). Me like. |
The pytest coverage report is your friend to find un-exercised code, BTW.
|
Is it reasonable to expect to get to 100% coverage? Or is it even "expected" as a matter of form? For example, I wouldn't have thought that lines 3-13 in system_health.py needed any test coverage, nor 29-30 (assuming 28 does) or 35 (assuming 34 does) in energy.py. As I couldn't see recorder.py at all in the branch on GitHub, I assume that it isn't committed yet 🤣 |
Lols. That's right... I got rid of it. It's still on me dev repo. 😂 It is reasonable to have good coverage because it saves you, me, and others testing time.
An assigned quality scale is a technicality. My aim is to address the glaring gaps that represent a higher risk, and improve the coverage over time. A good example of this is that our sensors are rigged to return zero should things turn to custard, which is not the way things should be. We have both seen this reported many times: "It broke and now all my values are zero!" The sensors should instead show as 'unavailable' if a value cannot be returned. Testing so far has highlighted this: certain exception conditions cannot be tested because the test method I tried blows up, since the code "works" but is dodgy from an HA POV. |
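A minimal sketch of the pattern being described (not the integration's actual sensor code): flag the entity unavailable when there is nothing sane to report, rather than serving up a zero.

```python
from homeassistant.components.sensor import SensorEntity


class ForecastEnergySensor(SensorEntity):
    """Hypothetical sensor illustrating availability instead of fake zeroes."""

    def __init__(self) -> None:
        self._value: float | None = None  # set by a coordinator update in real life

    @property
    def available(self) -> bool:
        # Report unavailable when no value could be computed, instead of returning 0.
        return self._value is not None

    @property
    def native_value(self) -> float | None:
        return self._value
```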
Yeah, I sort of get that, but at the same time I'm not sure that I agree completely. I sort of prefer a boolean sensor / diagnostic which can be used to know if all is good or not. I guess it depends on what the sensor is. But having said all that, I'm neither a Python nor a Home Assistant style expert. However, this is the side effect of a sensor becoming unavailable instead of an empty string or a zero count on the warnings entity; code that attempts to access the warnings attribute ends up looking like this:

{% if (states("sensor.st_kilda_west_warnings") not in ["unavailable", "none", "unknown"]) and (states("sensor.st_kilda_west_warnings") and ((state_attr("sensor.st_kilda_west_warnings", "warnings") | count) < 1)) %}
{{ "No Warnings" }}
{% else %}
{{ state_attr("sensor.st_kilda_west_warnings", "warnings")[0].short_title }}
{% endif %} |
This is pretty satisfying: near 100% code test coverage. So the to-do is to improve the scenarios over time instead, if there are gaps. I'm pretty sure there are very few gaps now... There are two lines that fail to be exercised, and these relate to sensors never becoming unavailable. I also needed to exclude code that is substituted under testing conditions: things like getting the current date/time, and the forecast fetch itself, because no real API call is made under test.
|
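For what it's worth, a hedged illustration of the exclusion being described: lines that are always substituted under test (such as a real "now" lookup, or the live forecast fetch) can be marked so they do not count against the coverage report.

```python
import datetime as dt


def real_now() -> dt.datetime:  # pragma: no cover - replaced by a fixed time in tests
    """Return the real current time; tests patch this out for determinism."""
    return dt.datetime.now(dt.timezone.utc)
```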
While test coverage may now be 💯% without "cheating", test scenarios have been bolstered as well.
|
LOL at pv_power_advanced
possible future enhancement.
That is a metric shit-tonne of code.
Deprecation warning for the patch-aiohttp-client-session approach be damned. I can't think of a better way right now... All API calls are now tested without monkey-patching substitute functions into the integration. Now tested for are exceptions in sites load, 429s on forecast fetch, 401 invalid-key responses, plus API limit exhaustion. And misc. other goodies. Next thing on the list is to remove the retry mechanism in get sites. I changed the init sequence a while back to prevent the integration loading if the sites fetch fails, and Home Assistant automatically retries the load with a back-off mechanism. All that retrying in the integration does is slow down the Home Assistant restart sequence should a dodgy integration config be in place and forgotten about/ignored. |
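To sketch the style (assuming the aioclient_mock fixture from pytest-homeassistant-custom-component is in play, and using a made-up endpoint URL purely for illustration), a 429 can be injected at the aiohttp layer rather than by swapping out the integration's own functions:

```python
async def test_forecast_fetch_rate_limited(hass, aioclient_mock) -> None:
    """Hypothetical test: the mocked session answers 429, no monkey-patched functions."""
    aioclient_mock.get(
        "https://api.solcast.com.au/rooftop_sites",  # placeholder endpoint, an assumption
        status=429,
        json={"response_status": {"message": "Too many requests"}},
    )
    # ...set up the config entry here and assert the integration backs off / retries...
```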
I hadn't seen that (at least in debug logs) - I assume it is a ruff warning? I have seen 429s at 20:00 ADST the last two days (two retries each night), and cache stale warnings at UTC midnight the last two days, which is interesting, but then again, I'm running old code 🤣 |
This is very interesting. It almost never happens on my prod rig, and never on dev. You have weirdo scheduling (because Docker?), with stuff not happening exactly when it should, and I wonder if that is somehow implicated. Unlikely, but it's the only thing I can think of. Feeling quite smug that I incorporated a stale check, as it sounds like you are my #1 customer...
Only seen under pytest. If the core folk get antsy then they can tell me how to avoid it!
|
Far more appropriate...
Why anyone of sound mind would do this is beyond me. But it does result in exceptions that want heading off at the pass...
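For the record, the check amounts to something like this (a sketch under the assumption that multiple API keys are supplied as a comma-separated string; this is not the PR's literal code):

```python
def has_duplicate_api_keys(api_key: str) -> bool:
    """Return True when the comma-separated API key list contains a repeated key."""
    keys = [key.strip() for key in api_key.split(",") if key.strip()]
    return len(keys) != len(set(keys))


# e.g. "1111,2222,1111" is rejected, "1111,2222" is accepted
assert has_duplicate_api_keys("1111,2222,1111") is True
assert has_duplicate_api_keys("1111,2222") is False
```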