From eacddb053f49a939f1b445a47b34b39f901b5afe Mon Sep 17 00:00:00 2001
From: "Gergely Daroczi (@daroczig)"
Date: Mon, 29 Jan 2024 00:42:21 +0100
Subject: [PATCH] restructure sections

---
 README.md | 27 ++++++++++++++++++++-------
 1 file changed, 20 insertions(+), 7 deletions(-)

diff --git a/README.md b/README.md
index fd40329d..58026e35 100644
--- a/README.md
+++ b/README.md
@@ -9,11 +9,19 @@ in alpha/beta testing.
 - [ ] describe how to set up auth for each vendor
 - [ ] list required IAM permissions for each vendor
 
-### Database schema
+## Database schema
 
 Database schema visualized and documented at https://dbdocs.io/spare-cores/sc-crawler
 
-### Usage
+## Usage
+
+The package provides a CLI tool:
+
+```shell
+sc-crawler --help
+```
+
+### Print table definitions
 
 Generate `CREATE TABLE` statements for a MySQL database:
 
@@ -21,13 +29,15 @@ Generate `CREATE TABLE` statements for a MySQL database:
 sc-crawler schema mysql
 ```
 
-Fetch and standardize datacenter, zone, products etc data into a single SQLite file:
+See `sc-crawler schema` for all supported database engines.
 
-<details>
+### Collect data
+
+Note that you need specific IAM permissions to be able to run the Crawler at the below vendors:
 
-<summary>Required permissions for AWS</summary>
+<details>
 
-You will need the following IAM permissions to be able to run the Crawler in AWS:
+<summary>Amazon Web Services (AWS)</summary>
 
 ```json
 {
@@ -52,11 +62,14 @@ You will need the following IAM permissions to be able to run the Crawler in AWS
 ```
 
 </details>
+
+Fetch and standardize datacenter, zone, products etc data into a single SQLite file:
+
 ```shell
 rm sc_crawler.db; sc-crawler pull --cache --log-level DEBUG --include-vendor aws
 ```
 
-### Other WIP methods
+## Other WIP methods
 
 Read from DB:
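Once the patch is applied, the README's `sc-crawler pull` command writes all collected data into a plain SQLite file. As an illustrative sketch only, not part of the patch: the standard `sqlite3` shell is enough to verify the result. The `sc_crawler.db` filename is taken from the README example above, and no table names are assumed.

```shell
# List the tables the crawler created in the SQLite file
sqlite3 sc_crawler.db ".tables"

# Dump the full schema to see the available tables and columns before querying
sqlite3 sc_crawler.db ".schema"
```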