
Display results in logs #2

Closed

kaihendry opened this issue Oct 8, 2019 · 23 comments

Labels
enhancement New feature or request

Comments

@kaihendry

I can see the results were uploaded somewhere: https://github.com/kaihendry/ltabus/runs/251577950

Now how do I view them?

Thanks!

@emilhein commented Oct 8, 2019

In the upper-right corner of the page you linked, it says Artifacts (1). There you can download the Lighthouse result.

@kaihendry (Author)

I was expecting to see some score inline.

e.g. https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fbus.dabase.com%2F%3Fid%3D82069

Why can't this just be on a URL instead?

alekseykulikov changed the title from "How do you view / analyse the results?" to "Display results in logs" on Oct 9, 2019
@alekseykulikov (Member)

Hi @kaihendry, thank you for your suggestion.
Yes, as @emilhein recommended, you can download artifacts and explore the results.
It would be great to display some high-level results in logs after each URL run.

Some ideas:

[screenshot]

  • Display budgets table, like in the LH report, if budgetPath is set

[screenshot]
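
(For a sense of scale, a minimal sketch of printing high-level category scores, assuming a single Lighthouse result (LHR) JSON is already on disk — the path below is hypothetical:)

const fs = require('fs');

// Hypothetical path to one Lighthouse result (LHR) JSON file.
const lhr = JSON.parse(fs.readFileSync('.lighthouseci/lhr.json', 'utf8'));

// Each LHR category has a title and a 0-1 score; print them as 0-100.
for (const category of Object.values(lhr.categories)) {
  console.log(`${category.title}: ${Math.round(category.score * 100)}`);
}

Real output would cover multiple URLs and budgets, but the LHR's categories object already carries the titles and scores needed.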

@exterkamp (Collaborator)

I'll also shamelessly plug the kind of cli output we use in the psi cli project.

Something like this:

(Obviously we would just have lab data.)

+1 to adding some charts/sparklines though, especially for opportunities.

@exterkamp (Collaborator)

I started playing with this on a branch.

@exterkamp (Collaborator)

I'm actually really loving cliui. I made a quick-and-dirty mockup of the perf category:

[screenshot]

@alekseykulikov (Member)

Wow @exterkamp, this looks AMAZING! I like the small details: the circle/square/triangle, colors, measurements, small pie charts. Overall formatting is 🔥.

@exterkamp (Collaborator)

I've started to expand this idea in a separate repo. I think it might be useful in ChromeLabs/psi, in lhci itself, and obviously here, so maybe keeping it separate makes sense: it keeps this repo cleaner, and we can just pull it in and call it, like lhci does. Thoughts?

@paulirish (Contributor)

I found https://github.com/codechecks/lighthouse-keeper which ends up displaying a markdown report of the result (example).

I am guessing this uses the output parameter of https://developer.github.com/v3/checks/runs/#update-a-check-run with a bunch of markdown?

Just wanted to point out that this could be an option as well.
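
(For the shape of that API: a minimal sketch of creating a check run with markdown output via octokit — the owner/repo values are placeholders and the summary content is illustrative:)

const { Octokit } = require("@octokit/rest");

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

async function postCheck() {
  // POST /repos/{owner}/{repo}/check-runs — output.summary accepts
  // GitHub-flavored Markdown.
  await octokit.checks.create({
    owner: "some-owner", // placeholder
    repo: "some-repo", // placeholder
    name: "Lighthouse",
    head_sha: process.env.GITHUB_SHA,
    conclusion: "neutral",
    output: {
      title: "Lighthouse results",
      summary: "**Performance**: 98\n**Accessibility**: 100", // illustrative
    },
  });
}

postCheck();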

@exterkamp (Collaborator)

I am guessing this uses the output parameter of https://developer.github.com/v3/checks/runs/#update-a-check-run with a bunch of markdown?

From what I can tell, actions can't return data like that yet? (Or I just can't find the docs 😕)

We could get an octokit client and then post a second status check with all this fun output... but then each action run would have 2 status checks, and I don't think you can make the action not fire off a check in the UI?

@alekseykulikov (Member)

Looking at @paulirish's https://github.com/paulirish/lighthouse-ci-action/pull/2, it's possible to have checks with markdown results as part of CI. I'm going to hack on this as well :)

@paulirish (Contributor) commented Nov 25, 2019

Going to try to boil down this issue...


Goals: clearly communicate to the developer the most important details.

Potential surfaces:

  1. Create a new github comment per push (a la lighthousebot)
    • See example
    • I personally think automated bot comments are noisy and I'm very eager to disable them. A comment on every push gets annoying fast, especially if the results aren't changing (or they're changing more than expected)
  2. Display richer results in the workflow's logs
    • See example
    • Lighthouse used to have a "prettyprint" CLI output FWIW. It was a pain to maintain and ultimately its lack of interaction made it scale poorly for the data we had to convey.
    • I think, ideally, logs are only manually inspected when debugging a problem. They're too buried a surface for such attractive output.
  3. Post a "check"
  4. Post a link (somewhere..) to the Lighthouse HTML report
    • This is the highest-fidelity representation, but it can't be automatically enabled due to data privacy concerns.

Right now I think using a Check is the most flexible choice here.

I think maintaining an LHR->check-markdown formatter will be somewhat painful, but that's an option. It's worth noting that raw HTML is accepted as Github-flavored Markdown, and <details> and <table> blocks work... so a surprising amount of the HTML report can render decently when interpreted as GFM:

[screenshot]
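
(A rough sketch of what such a formatter could lean on — the <details>/<table> trick, using category fields from the LHR JSON; the naming and scope here are illustrative:)

// Sketch of an LHR -> GitHub-flavored Markdown formatter. GFM renders
// raw <details> and <table> HTML, so we can emit collapsible sections.
function lhrToMarkdown(lhr) {
  const rows = Object.values(lhr.categories)
    .map((c) => `<tr><td>${c.title}</td><td>${Math.round(c.score * 100)}</td></tr>`)
    .join("\n");
  return [
    `<details><summary>Lighthouse: ${lhr.finalUrl}</summary>`,
    "<table>",
    "<tr><th>Category</th><th>Score</th></tr>",
    rows,
    "</table>",
    "</details>",
  ].join("\n");
}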


I totally agree using the download-artifacts thing is terrible and we 100% need something better.

Ideally, we would really consider the UX here a tad more...

  • What information do we most want to see?
  • Is it possible to replace the HTML report or is that definitely necessary?
  • It's possible the diff view (or a markdown summary of the diff view?) is what is most wanted here.

I'd be curious to hear more of the usecases that inspire these feature requests.

@alekseykulikov (Member)

Thanks, @paulirish, for the excellent overview of our options!

Goals: clearly communicate to the developer the most important details.

I completely agree with the goal.

A potential solution could look like this:

  • Github comments are useful to communicate failed assertions/budgets, but not on every run. It should be possible to opt out or use a Slack notification instead.
  • Checks could be the primary UI to communicate results. They are more prominent than logs. Checks may contain a results overview similar to "feature request: report results as a PR comment?" #17 (comment), with a link to a complete LH report. In case of an issue, they display the change and a link to the LHCI compare view.

A missing piece is long-term storage of results and a way to display them.
temporary-public-storage is excellent, but it keeps reports for just 7 days and is public. A custom LHCI server is another option, but it requires setting up and supporting a server/PG, plus updating and syncing LHCI versions.

We could try to store results in a Github Gist and use the Github Viewer to display them. The viewer would need to support a file name and a commit sha (right now it finds a report by gist id). The history of Lighthouse reports would then be aggregated in the gist, and checks would use the Lighthouse Viewer with a link to filename/sha.
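
(To make the gist half of this concrete — a minimal sketch using octokit, where the long-lived gist and the filename-by-sha naming are assumptions:)

const { Octokit } = require("@octokit/rest");

const octokit = new Octokit({ auth: process.env.GIST_TOKEN });

// Append one report per commit to a long-lived gist; encoding the sha in
// the filename is the hypothetical convention a viewer could resolve.
async function saveReport(lhr, sha) {
  await octokit.gists.update({
    gist_id: process.env.REPORTS_GIST_ID, // assumed pre-created gist
    files: {
      [`lhr-${sha}.json`]: { content: JSON.stringify(lhr) },
    },
  });
}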

To display changes and compare results, we could try to implement a stateless Lighthouse Comparer. It would be similar to the Lighthouse Viewer, but powered by the LHCI UI to compare two reports. In combination with Gists, it would be easy to privately store all results, have links to reports, and easily explore changes.
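
(And a toy version of the comparison itself — diffing category scores between two LHRs; the real LHCI diff covers audits and metrics, not just categories:)

// Toy score diff between a base and head LHR; illustrative only.
function compareCategories(baseLhr, headLhr) {
  return Object.keys(headLhr.categories).map((id) => {
    const base = Math.round(baseLhr.categories[id].score * 100);
    const head = Math.round(headLhr.categories[id].score * 100);
    return { category: id, base, head, delta: head - base };
  });
}

// Hypothetical usage with two saved reports:
console.table(compareCategories(require("./base-lhr.json"), require("./head-lhr.json")));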

Gist storage requires the creation of deploy keys, which is a trade-off (not sure if it's possible to avoid).

It is just an idea. I'd like to hear your feedback.

alekseykulikov added the enhancement (New feature or request) label and removed the help wanted (Extra attention is needed) label on Nov 26, 2019
@roelfjan commented Dec 24, 2019

  • What information do we most want to see?
    • a link to the actual report
    • the score per category, per URL, compared with the budget
    • nice to have: a quick overview of failing items

For me, a Check which passes/fails based on the perf budget would work. If the check passes, I'm not interested in more information. If it fails, I want to know, without too many steps, what's wrong. I think a Check is perfect for both use cases.

Some more inspiration for the Checks UI below, made in the pre-GitHub Actions/Lighthouse CI era with a GitHub App which used WebPageTest and its Lighthouse feature:

[screenshot]
Source

Also: I agree with @alekseykulikov on "a missing piece is long-term storage of results and a way to display them". It's a high threshold to set up a custom server.

alekseykulikov added a commit that referenced this issue Mar 11, 2020
@alekseykulikov (Member)

Thank you, everyone, for a fantastic discussion with plenty of ideas!

The final solution is to use annotations to communicate each failed assertion, and it shipped with the 2.3.0 release.

[screenshot]

Reasons

  1. It is a native solution for communicating failures in the Github Actions environment. A user gets an email with a link to a report and the number of failed annotations.
    [screenshot]
  2. An annotation shows failed tests grouped by URL and a link to the Lighthouse report (if reports were uploaded).
  3. If tests fail, the action shows failed assertions and attaches Lighthouse results for quick debugging. Example:
    [screenshot]
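
(For anyone curious how annotations get emitted from an action: in the @actions/core toolkit, error/warning calls surface as annotations — a minimal sketch, not necessarily this action's exact code:)

const core = require("@actions/core");

// Each core.error() call surfaces as an annotation in the workflow UI;
// here, one annotation per URL listing its failed assertions.
function annotateFailures(failuresByUrl) {
  for (const [url, failures] of Object.entries(failuresByUrl)) {
    core.error(`${url}\n${failures.join("\n")}`);
  }
  if (Object.keys(failuresByUrl).length > 0) {
    core.setFailed("Lighthouse assertions failed");
  }
}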

Why not Github Check?

@paulirish made a great argument in favor of checks: #2 (comment)

But after a bit of hacking, we found a few issues with Github Check:


Annotations have one minor issue right now: they are plain text and don't support markdown. I hope it will be fixed in the future.

@magus commented Apr 30, 2020

Curious if we have considered making the information available with core.setOutput:

https://github.com/actions/toolkit/blob/187d4aa6250e38be6346f705d565569850bf0c3f/packages/core/src/core.ts#L86

It seems this would allow developers to easily build all these custom solutions themselves, rather than trying to satisfy everyone with a single solution.

You could also pass the URL to the results into a GitHub comment, for example.

@paulirish (Contributor)

@magus how is the output exposed to the user? It wasn't clear from the docs, and I don't know anything that uses it.

@magus commented May 2, 2020

Here's a super-reduced example I created to show how something like this could work:

https://github.com/magus/lighthouse-ci-output/runs/638824574?check_suite_focus=true

steps is a global that has keys for each step with an id specified in a workflow. You can index into it inside the workflow file to pass values into steps. In the example above, the action.yml defines the metadata output, sets it as output inside action.js, and then echoes it back in the final workflow step using that global namespace (${{ steps.lhci.outputs.metadata }}).

Without knowing the details of the exact format of the generated .lighthouse directory, I think the rough flow for this repo would be something like...

  1. Read output metadata from the .lighthouse directory and store it in memory in an object.
  2. Output the metadata to the shared workflow global steps:

const core = require("@actions/core");
// ...
core.setOutput("metadata", JSON.stringify(metadata));

  3. Consumers of this action could then specify any id they want for this step in their workflow:

- name: Audit URLs using Lighthouse
  id: lhci
  uses: treosh/lighthouse-ci-action@v2
  with:
    urls: |
      https://example.com/
      https://example.com/blog

  4. Any subsequent public or private actions they use can access the metadata exposed by Lighthouse with ${{ steps.lhci.outputs.metadata }}:

- name: Write fancy Lighthouse CI github PR comment
  run: ./scripts/githubComment.js '${{ steps.lhci.outputs.metadata }}'

@paulirish (Contributor)

Wow, yeah, this example is super useful. Thank you.
So we can basically store strings, and the expectation is that output props are probably JSON.

Well, that's easy enough... I think we just need to decide on the schema of the output object, but that seems doable. I'll file a new issue to track this.
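
(To make that concrete, one hypothetical shape for the output object — the naming is entirely illustrative, and the real schema is whatever gets decided in the follow-up issue:)

const core = require("@actions/core");

// Hypothetical output shape; illustrative only.
const metadata = {
  results: [
    {
      url: "https://example.com/",
      summary: { performance: 0.98, accessibility: 1, seo: 0.92 },
      reportUrl: "https://example.com/report", // if a report was uploaded
    },
  ],
};

core.setOutput("metadata", JSON.stringify(metadata));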

@paulirish (Contributor)

Ah! Turns out this is already filed at #41. We can follow up there.

@ChildishGiant

Does anyone have an action that pretty prints this info?

@bep (Contributor) commented Aug 16, 2021

@ChildishGiant there surely must be nicer ways to do this, but the snippet below works (add/remove what you need). These steps need to come after the Lighthouse step(s):

name: Lighthouse
on:
  status:
jobs:
  lighthouse:
    runs-on: ubuntu-latest
    timeout-minutes: 20
    steps:
      # ... Lighthouse step(s) omitted
      - id: read-manifest
        run: |
          echo ::set-output name=manifest::$(cat .lighthouseci/manifest.json)
      - uses: actions/github-script@v4
        if: github.event.context == 'deploy/netlify' && github.event.state == 'success'
        env:
          MANIFEST: ${{ steps.read-manifest.outputs.manifest }}
        with:
          script: |
            const manifest = JSON.parse(process.env.MANIFEST);
            console.log('\n\n\u001b[32mSummary Results:\u001b[0m');

            const rows = manifest.map((row) => {
              return {
                "url": row.url.substring(row.url.indexOf("netlify.app") + 11),
                "performance": row.summary.performance,
                "accessibility": row.summary.accessibility,
                "seo": row.summary.seo
              }
            });

            console.table(rows);

@JackywithaWhiteDog commented Jul 7, 2023

I've created a separate Lighthouse Viewer Action that displays the reports on the command line as a workaround. Perhaps it could be integrated as an optional feature in the future.

To use the action, set continue-on-error: true for the Lighthouse CI Action, and feed its resultsPath and outcome to it as input parameters:

name: Lighthouse Auditing

on: push

jobs:
  lighthouse:
    runs-on: ubuntu-latest
    name: Lighthouse-CI
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Audit with Lighthouse
        id: lighthouse
        continue-on-error: true
        uses: treosh/lighthouse-ci-action@v10
        with:
          urls: |
            https://example.com/
          configPath: './lighthouserc.json'
      - name: Display Report
        uses: jackywithawhitedog/lighthouse-viewer-action@v1
        with:
          resultsPath: ${{ steps.lighthouse.outputs.resultsPath }}
          lighthouseOutcome: ${{ steps.lighthouse.outcome }}

Screenshot:

[screenshot]
