Commit eacddb0: restructure sections
daroczig committed Jan 28, 2024 (parent: c5f197e)
Showing 1 changed file, README.md, with 20 additions and 7 deletions.
[...] in alpha/beta testing.
- [ ] describe how to set up auth for each vendor
- [ ] list required IAM permissions for each vendor

## Database schema

The database schema is visualized and documented at https://dbdocs.io/spare-cores/sc-crawler

## Usage

The package provides a CLI tool:

```shell
sc-crawler --help
```

### Print table definitions

Generate `CREATE TABLE` statements for a MySQL database:

```shell
sc-crawler schema mysql
```

See `sc-crawler schema` for all supported database engines.
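
As a usage sketch that is not part of the README: assuming a reachable MySQL server, the stock `mysql` client, and a pre-created `sc_crawler` database (all hypothetical names here), the generated DDL can be piped straight into the server:

```shell
# apply the generated CREATE TABLE statements to a hypothetical sc_crawler database
sc-crawler schema mysql | mysql -h localhost -u root -p sc_crawler
```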

### Collect data

Note that you need specific IAM permissions to run the Crawler with each of the vendors below:

<details>

<summary>Amazon Web Services (AWS)</summary>

```json
{
  ...
}
```

</details>


Fetch and standardize datacenter, zone, product, and other data into a single SQLite file:

```shell
rm sc_crawler.db; sc-crawler pull --cache --log-level DEBUG --include-vendor aws
```
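
The resulting file is a plain SQLite database, so it can be inspected with standard tooling; for example, assuming the `sqlite3` command-line shell is installed:

```shell
# list the tables the Crawler created
sqlite3 sc_crawler.db '.tables'
# print the full schema of the generated database
sqlite3 sc_crawler.db '.schema'
```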

## Other WIP methods

Read from DB:
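
A minimal sketch, assuming the `sqlite3` shell rather than the package's own API:

```shell
# read a few rows back; the vendors table name is a guess for illustration only
sqlite3 -header -column sc_crawler.db 'SELECT * FROM vendors LIMIT 5;'
```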

