
UAT testing reproduction assistant (Ready for Review) #124

Open
joe-getcouragenow opened this issue Jan 28, 2021 · 0 comments

joe-getcouragenow commented Jan 28, 2021

You know how, when you try to reproduce a bug, it takes forever to get your dev machine and the data in the system set up to actually reproduce it? Well, this fixes that.

If a bug is too hard to reproduce, we send the user to org-y (let's call it the "base" data bootstrapper), ask them to reproduce it there, using the modules to get the data into that state, and then give them a button to snapshot the DB to GCS, tagged with the issue number.

Store:

  • DBs
  • gitsha
  • config

As a dev, booty can then pull the git hash for all repos, pull the DBs, and put them onto the file system in the right place.
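Roughly, the snapshot record only needs to carry something like this (a sketch; field names are illustrative, not a final schema):

```go
// Package snapshot: a sketch of what one snapshot record in the GCS
// bucket could carry. Field names are illustrative, not a real schema.
package snapshot

// Snapshot ties the DB dumps, gitsha, and config to an issue number.
type Snapshot struct {
	IssueNumber int               `json:"issue_number"` // the GitHub issue this repro belongs to
	GitSha      string            `json:"gitsha"`       // main repo sha; sub-repo shas come from the Jsonnet
	DBDumps     []string          `json:"db_dumps"`     // GCS object paths of the database dumps
	Config      map[string]string `json:"config"`       // server config captured at snapshot time
}
```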

That saves us quite a bit of time, effort, and cognitive load, and for users it shortens how long a bug takes to get fixed.

I don't think it's impractical to ask a user to do the two minutes of work to get the base system into the required state. They can use two tabs and copy and paste across to speed things up.

This would work for UAT but also for real users.

Design

The natural extension of this is to make it such that a user can get a clean base system to do UAT from, so that they don't cross-pollute each other.

Web Site is running on Hetzner under caddy as a Virtual Host.

Booty is running on Hetzner under caddy as a Virtual Host.

The flow would be like:

start

User is at the project web site and clicks "Demo --> Create Demo".
OR
User is in the app itself and clicks "Support --> Create Ticket".

We open the booty Admin in a new tab with the gitsha passed across.

booty provision

The booty gRPC API gets the call from the booty Web GUI to provision a demo, along with the gitsha needed.

Pulls the GCS bucket and places it into a folder called "user123-gitsha-datetime-basename.domain.org":
user123 = a random username
gitsha = so we can easily identify it as devs if we need to.
datetime = so a cleanup cron job in booty can delete instances older than a month, along with the matching caddy config.
basename = the equivalent of what we now call "org-y". We should rename that to "base", as it clearly indicates it is the base data bootstrap.

Adds the required config to caddy pointing to the binary and URL.

The gRPC API returns the URL of the newly provisioned site to the user.
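A rough sketch of that provision step in Go (the handler and helper names are hypothetical, and the gRPC/proto wiring is left out):

```go
// Package provision: a sketch of the provision step. Handler and helper
// names are hypothetical; the gRPC/proto wiring is left out.
package provision

import (
	"fmt"
	"math/rand"
	"time"
)

// ProvisionDemo pulls the base snapshot out of GCS, wires up caddy, and
// returns the URL of the freshly provisioned demo instance.
func ProvisionDemo(gitsha, basename, domain string) (string, error) {
	user := fmt.Sprintf("user%03d", rand.Intn(1000)) // random username
	stamp := time.Now().UTC().Format("20060102-150405")

	// user123-gitsha-datetime-basename.domain.org
	instance := fmt.Sprintf("%s-%s-%s-%s.%s", user, gitsha, stamp, basename, domain)

	if err := pullBaseSnapshot(instance); err != nil { // GCS bucket -> local folder
		return "", err
	}
	if err := addCaddyVHost(instance); err != nil { // new virtual host entry
		return "", err
	}
	return "https://" + instance, nil
}

func pullBaseSnapshot(instance string) error {
	// sketch: download the base snapshot objects into ./<instance>/
	return nil
}

func addCaddyVHost(instance string) error {
	// sketch: append "<instance> { reverse_proxy ... }" to the Caddyfile
	// and reload caddy
	return nil
}
```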

user reproduces

User is given the URL, and off they go.

They can copy and paste any data needed from the real app into the demo app's GUI to reproduce. They should then see the exact same bug.

If they can't see the bug in the demo system, then it's 100% certain they did not set it up correctly, which itself shows the value of this system in terms of saving us heaps of time.

Once the user has reached the point where they have reproduced the bug, they click a button called "Save Reproduction".

booty save state

This calls the booty gRPC API with the data:

  • The URL they did the repro on, so a dev can then go back in and play with it before even having to touch their own machine.
  • The user's email, so we can get back in touch. Remember, we never required them to log in. Maybe later we can, but it's minor.

Booty now needs to call into the Demo Server over the gRPC API to tell it to use its standard functionality to save the DBs.

  • Booty has the URL, but needs to pass security, because we don't want just anyone being able to do this on a non-demo deployed machine, and I don't want to hack this up...
  • So a Provisioning Security Concept is needed!
    • We know we only want SuperAdmin or OrgAdmin initiating snapshots and backups. A cron job currently does it.
    • We know we want Booty to be able to be in charge of 1,000s of org deployments. Makes life easy for us, orgs, etc.
    • The same API call is used for the CLI too.
    • Sys backup does have a security guard modelled, and only SuperAdmin and Cron can run it.
    • Not sure right now what the best way to do this is. The obvious way is a token in each server config mapped to the SuperAdmin: if Booty has that token, it can pass the security check at the gRPC API and ask for the DBs to be backed up (see the interceptor sketch after this list). It will work, and it's not too awful. But it means Booty now needs a config to hold the token. So that we don't bloat booty out with lots of config, maybe we need to start many Booties like we were thinking: Booty-Dev, Booty-User, with different imports and hence different CMDs, etc.
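A minimal sketch of that token check, as a gRPC unary interceptor on the Demo Server side (the `x-booty-token` metadata key and the config mapping are assumptions):

```go
// Package auth: a sketch of the SuperAdmin token check as a gRPC
// unary interceptor. The metadata key is made up for illustration.
package auth

import (
	"context"
	"crypto/subtle"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/metadata"
	"google.golang.org/grpc/status"
)

// TokenInterceptor rejects any call that does not carry the SuperAdmin
// token from the server config. Booty sends the same token to pass.
func TokenInterceptor(superAdminToken string) grpc.UnaryServerInterceptor {
	return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo,
		handler grpc.UnaryHandler) (interface{}, error) {
		md, ok := metadata.FromIncomingContext(ctx)
		if !ok || len(md.Get("x-booty-token")) == 0 {
			return nil, status.Error(codes.Unauthenticated, "missing token")
		}
		got := md.Get("x-booty-token")[0]
		if subtle.ConstantTimeCompare([]byte(got), []byte(superAdminToken)) != 1 {
			return nil, status.Error(codes.PermissionDenied, "bad token")
		}
		return handler(ctx, req) // token matched, let the backup run
	}
}
```

The Demo Server would wire it in with `grpc.NewServer(grpc.UnaryInterceptor(auth.TokenInterceptor(cfg.SuperAdminToken)))`, and the same check covers the CLI path since it hits the same API.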

Booty then:

  • creates a ticket in a GCS bucket.
  • downloads the snapshot from the Demo Server to itself, and puts it into the GCS bucket.
  • saves the email in a manifest file.
  • If we want, we can also record the "steps to reproduce" by recording all router changes in the demo, and pass those into the manifest too.

Booty returns the result to the GUI over the gRPC API.
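A sketch of that save step, using the real `cloud.google.com/go/storage` client; the manifest shape is made up:

```go
// Package ticket: a sketch of booty saving the ticket manifest to GCS.
// The manifest shape is made up; the storage client calls are real.
package ticket

import (
	"context"
	"encoding/json"

	"cloud.google.com/go/storage"
)

// Manifest is a hypothetical shape for what booty records per ticket.
type Manifest struct {
	ReproURL string   `json:"repro_url"`
	Email    string   `json:"email"`
	Steps    []string `json:"steps,omitempty"` // recorded router changes, if captured
}

// Save writes the manifest next to the DB snapshot in the GCS bucket.
func Save(ctx context.Context, bucket, ticketID string, m Manifest) error {
	client, err := storage.NewClient(ctx)
	if err != nil {
		return err
	}
	defer client.Close()

	w := client.Bucket(bucket).Object(ticketID + "/manifest.json").NewWriter(ctx)
	if err := json.NewEncoder(w).Encode(m); err != nil {
		w.Close()
		return err
	}
	return w.Close() // the upload is only finalized on Close
}
```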

user makes ticket

The user then clicks a button called "Open Ticket".

Client side, we use the Flutter webview (from inside a Flutter webview, but I think that worked when I tested it) to load up the correct GitHub URL and pass the args in the querystring:

  • The URL.
  • We don't pass the email, because they might want privacy, and we captured it into GCS in the previous step.

GitHub parses the args and loads up our User Template.

  • Shows the URL.
  • Lets the user write up the bug, etc.
  • Gets the exact URL of the page where the bug is.
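GitHub supports prefilled issue URLs through the querystring on `/issues/new`, so the client-side bit is roughly this (the repo path and template name are placeholders):

```go
// Package ticketurl: a sketch of building the prefilled GitHub issue
// URL the webview opens. Repo path and template name are placeholders.
package ticketurl

import "net/url"

const newIssueURL = "https://github.com/ORG/REPO/issues/new"

// ForRepro returns the GitHub new-issue URL with the repro URL baked
// into the body, so the User Template shows it straight away.
func ForRepro(reproURL string) string {
	q := url.Values{}
	q.Set("template", "user_bug.md") // hypothetical template file
	q.Set("body", "Reproduction URL: "+reproURL+"\n\nWhat happened:\n")
	return newIssueURL + "?" + q.Encode()
}
```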

BTW, this will make it easy to build a Flutter/golang ticketing system later.

Code

Booty GUI built with Flutter

  • gRPC is so solid for us, and on a local machine it can run as a single binary and be an awesome Booty Admin for us and users. It will all just be quicker and better...

This would imply that booty is given a gRPC API, and that the product web site's Demo page can talk to it.
Turn on CORS, locked to the product site's origin, so no other web page can talk to booty from the browser.
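Roughly, on booty's web server (CORS only guards browser-originated calls, so a token check still has to cover direct callers; the header names here are assumptions):

```go
// Package bootyweb: a sketch of locking booty's web endpoint down to
// the product site's origin. Header names here are assumptions.
package bootyweb

import "net/http"

// CORSOnly sets the CORS allow headers only for the product website,
// so browsers refuse responses for any other origin. Note this guards
// browser calls only; direct gRPC callers still need a token check.
func CORSOnly(next http.Handler, allowedOrigin string) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if origin := r.Header.Get("Origin"); origin == allowedOrigin {
			w.Header().Set("Access-Control-Allow-Origin", origin)
			w.Header().Set("Access-Control-Allow-Headers", "content-type, x-grpc-web")
		}
		if r.Method == http.MethodOptions {
			w.WriteHeader(http.StatusNoContent) // preflight answered here
			return
		}
		next.ServeHTTP(w, r)
	})
}
```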

Booty runs off the GCS store to get the provisioning bits it needs. This presumes that booty deployed everything to GCS when we did a release, which is already in the Booty Epic we are doing. It shows how GitHub-based releases are probably not worth it for us, as they have little utility for us or the users. It's far better that a user goes to the product website, plays with the demo, and then downloads the "package" from the website, which gets it from GCS. It keeps everything smooth for us and users.

The rest, as far as I can see, is obvious. Not only that, but it is the exact same code a dev or UAT person would run to bring the system up on their local machine, so there is very little code to write.

The main thing is that booty just needs its own web server and Web GUI. It would make things really turnkey with little work.

In order for booty to be able to pull the gitsha for all repos, it's easy for us because of Jsonnet. The gitsha that is given to the developer for the reproduction is for the main repo, and inside that, the Jsonnet has the gitshas of all the sub-repos.
The booty tool already has, or will have, a feature to do that bit for a dev.
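With `github.com/google/go-jsonnet` that bit is roughly as below; the `repos` field name is an assumption about how our Jsonnet lays out the sub-repo shas:

```go
// Package repos: a sketch of pulling the sub-repo gitshas out of the
// main repo's Jsonnet. The "repos" field name is an assumption.
package repos

import (
	"encoding/json"

	"github.com/google/go-jsonnet"
)

// SubRepoShas evaluates the Jsonnet file and returns repo name -> gitsha.
func SubRepoShas(jsonnetFile string) (map[string]string, error) {
	vm := jsonnet.MakeVM()
	out, err := vm.EvaluateFile(jsonnetFile) // Jsonnet -> JSON string
	if err != nil {
		return nil, err
	}
	var doc struct {
		Repos map[string]string `json:"repos"`
	}
	if err := json.Unmarshal([]byte(out), &doc); err != nil {
		return nil, err
	}
	return doc.Repos, nil
}
```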
