Upload target filesystem, Override File Naming #142
Hey @alexfornuto, is the incremental number sequence on the files something that you are looking for too? In your example I see you used the pattern {url}-{sequential-number}, but a result from the lib looks more like this: [image]. Or, based on the image, is the prefix the only thing you want to change?
Sorry for not being more precise. The numbers in my example correlate to the three passes I've configured on my end. Since I know that there are three passes, those numbers are predictable, whereas the current timestamp format is not. I don't care about the …
@alexfornuto we really don't want to encourage folks to read directly from the `.lighthouseci` directory. If this is something you'd really like to see, I think we treat this as a feature request for …
@patrickhulce that …
Thanks for the detailed response @alexfornuto 🙂
Haha, it most certainly is the better option, and it sounds like something we'll eventually need to tackle. I was looking for a quick way to unblock you @alexfornuto, but I suppose if you give a mouse a cookie... 😉 For now I view this as pretty low priority, because all of the information is already there to rename the files yourself (by parsing the JSON), so a naming pattern is more a convenience than new power (unlike #139).
Welp, the can of worms has been opened now. Sorry!
Forgive my ignorance. I know that this must be the case, but I don't know how. If you could give me a brief example of how one would do this, I could probably unblock myself from there.
Ah gotcha, no worries! Each of the JSON files in `.lighthouseci` is a complete Lighthouse result. You can pluck out whatever data you need from it.
OK, thanks! That definitely helps. But in order to build a script that plucked data out of those files, I'd need to know what they would be named, right? It seems like a bit of a catch-22 to me.
Start by reading all the JSON files in the output directory.
@paulirish indeed. I suppose this is where my own limitations as a technical writer forced to dev his own platform are catching up to me. I'll go away now and see if I can't figure out globbing filenames in JS.
Hey @patrickhulce, just to clarify this issue:

Questions: …
These questions raise good points @jzabala, in that I don't think we should pursue this right now, until we have more than one potential consumer to iron out what the API should look like :)
Awesome @patrickhulce. If this issue gets a couple more +1s, I wouldn't mind returning to the discussion about the API design and implementing it 👍🏼
I think this is the most important issue in this library. I would like to automate reading the results easily. The only reasonable approach that comes to my mind right now is …
That's too much, and it would actually be another automation script. There may be a map between input URLs and output files. The library may continue to create files as it wants inside `.lighthouseci`.
Can I ask what you're doing with the scores? There might be a higher level function we can automate within lighthouse-ci, and I'm not sure that a filesystem dump will help you here. You'll still have to read the directory to get the list of reports, do some parsing to find the one you're interested in, and then read the scores from the JSON whether this feature is implemented or not.
I'm not sure I understand what you mean by this. Do you just want a summary JSON file that looks like this?

```json
{
  "https://example.com/page1": [
    {"performance": 90, "pwa": 32},
    {"performance": 91, "pwa": 32},
    {"performance": 89, "pwa": 32}
  ],
  "https://example.com/page2": [
    // ...
  ]
}
```
I have a system that sends the client an email containing a changelog for each deploy. I started to think about attaching the performance, accessibility, etc. scores to the bottom of that email when I saw this tool. The folder … The JSON summary you wrote works for me, but one may still need to access more properties. The way I'm going to use it will probably look like this (after parsing, inside an email):

```
Performance Scores Report

Homepage
Performance: 91 | Accessibility: 92 | Best Practices: 93 | SEO: 94

Product Pages
Performance: 91 | Accessibility: 92 | Best Practices: 93 | SEO: 94
```

It newly came to my mind that we could specify an identifier with the run, perhaps a version number (1.3.44) or some other identifier, and then access the results in JSON format:

```sh
lhci autorun --id=1.3.44
# later on
lhci report --id=1.3.44
# and we have a JSON file: lhci-1.3.44.json
```

:) This would be better than the previous mapping idea.
Ah, there's a slight disconnect here: … As for your use case, I'm not sure there's much in a …
Yes. A manifest may save us from detecting the filenames of the reports, and it would be more sustainable. I'm not much interested in the …
@patrickhulce could you please specify the expected API so we can help you with it?
I'm currently thinking that the usage looks something like this: …

and it would output the median LHR reports into …

Contents of the output file:

```json
[
  {
    "url": "http://localhost:8080/path/to/page",
    "representative": true,
    "extension": "html",
    "filePath": "./localhost_8000-path_to_page-2020_01_30_15_12_12.report.html",
    "summary": {"performance": 0.52, "accessibility": 0.79, "seo": 1, "best-practices": 0.23}
  }
  // ...
]
```

Eventually we could support options like … Any thoughts on this API, folks?
What if we replace or remove …? Also, with …
Also, it would be nice to be able to use …
Are there any updates on this? I'm working on running Lighthouse on each app release using https://github.com/treosh/lighthouse-ci-action. I would like to have the contents of … I don't mind creating a PR for this too, but I'm not very familiar with this project and might need some guidance, if that's ok 🙂
Thanks for the offer @staceytay! This is a slightly larger, unscoped effort that has some interplay with the breaking changes in #119 though, so I'm not sure it's a great candidate for external PR help at the moment. FWIW, all of the data is available for the GH action to build this output itself. If we just agree on an interface for the manifest, then there's no need for this to block that issue. How does everyone feel about the following?

```ts
interface EntrySummary {
  performance: number // all category scores on 0-1 scale
  accessibility: number
  'best-practices': number
  seo: number
  pwa: number
}

interface Entry {
  url: string // finalUrl of the run
  isRepresentativeRun: boolean // whether it was the median run for the URL
  jsonPath: string
  htmlPath: string
  summary: EntrySummary
}

// Manifest contains an array of entries, 1 for each run.
// Let's say all are included by default, skipping over the `--include-all-reports` bit.
type Manifest = Array<Entry>

// Alternative: we group the entries by url
type Manifest = Record<string, Array<Entry>>
```
@patrickhulce sure thing, thanks! At least for the use case I'm thinking of, this interface works, with either …
@patrickhulce +1 for the format as an array.
It would be great to support something like … It's possible to build this data manually, but it may differ from the future LHCI output, and the logic of …
Interesting, I would've expected a vote in favor of the map, to do things like … We can do the array, SGTM :)
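For reference, here is a sketch of how a consumer might use the array-shaped manifest once it exists. The field names (`url`, `isRepresentativeRun`, `summary`) follow the draft `Entry` interface in this thread and could change before anything ships.

```javascript
// Collapse a manifest (array of run entries) down to one summary per URL,
// keeping only the representative (median) run for each page.
function summaryByUrl(manifest) {
  const byUrl = {};
  for (const entry of manifest) {
    if (entry.isRepresentativeRun) byUrl[entry.url] = entry.summary;
  }
  return byUrl;
}
```

With the array shape, grouping by URL like this is a one-liner for consumers, which is part of why the flat array seems sufficient.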
Unfortunately, #328 doesn't solve one of the main parts of this feature request: dictating the file naming. My job building CI processes to format this data is made harder by not having predictable and repeatable names for the data files. But thanks for the improvement! EDIT: …
That's what …
Oh, excellent! 👍
Quick poll on the expected behavior if …: 🚀 overwrite it, …
What about: …
I don't want to assume that …
For more context: I'm not keeping any data longer than a CI job. I know that this is against Lighthouse best practice or whatever, but I need to run the test a few times to pull out outliers and then average the scores. The tests can be wildly unpredictable and inaccurate sometimes, and I'm trying to mitigate that within the confines of a CI job reporting the effects of a code/content change from a single PR. Having said all that, I've voted for 🚀 as my second choice.
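The outlier-and-average step described here can be done entirely outside lhci. A hypothetical helper, assuming each run's scores have already been extracted into plain objects of category-to-number:

```javascript
// Average each category's score across several runs of the same page.
// With trim=true, the single lowest and highest values are dropped first,
// to blunt the wildly unpredictable runs mentioned above.
function averageScores(runs, {trim = false} = {}) {
  const averaged = {};
  for (const category of Object.keys(runs[0])) {
    let values = runs.map(run => run[category]).sort((a, b) => a - b);
    if (trim && values.length > 2) values = values.slice(1, -1);
    averaged[category] = values.reduce((sum, v) => sum + v, 0) / values.length;
  }
  return averaged;
}
```

For example, three runs scoring 0.2, 0.8, and 0.95 on performance average to 0.8 with `trim: true`, since the single low and high outliers are discarded.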
Gotcha, thanks for the extra context!
I'm very interested to hear why increasing …
I'll be honest, I haven't looked at my CI testing since December, when I made this issue. When I can look at it again, I'll look into …
Thanks to #139 I can now handle the JSON and HTML reports locally without having to upload the results to an LHCI server (thanks for that!).
In order to programmatically handle the files generated, I'd like to be able to define a consistent pattern for the filenames. For example:

IF I run

```sh
lhci collect --url=https://pantheon.io/docs --url=https://some.staging.environment.io/docs
```

AND I define some extra parameter like `--format=$URL-$PASS_NUM` (just spit-balling on this),

THEN the contents of `.lighthouseci` would look like: …

Note that the filename doesn't need to contain the full URL; this is just an example that reuses a parameter already defined in another flag. Anything consistently predictable would suit my needs.