- Do code reviews
- Use the Scrum methodology
- Test and production environments must be identical and well defined: e.g. a FreeBSD image or an Ubuntu Docker container
- Virtualize Everything! Awesome-Compose.
- Application logs
docker-compose logs
- Linux-based OS logs:
/var/log/messages
- ssh logs
/var/log/audit/audit.log
- Problem: it's difficult to include visualization in this workflow, yet it's essential for fast prototyping.
- Perhaps text-to-diagram
- AsciiFlow
- Spectral, a diverging colormap from matplotlib (sketch below).
- Tikz Editor
- Matplotlib
- d3.js example
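A minimal sketch for the Spectral/Matplotlib items above; the data here is invented purely to show a diverging colormap:

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative data only: a smooth 2-D field with positive and negative values,
# which is where a diverging colormap such as Spectral makes sense.
x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
z = np.sin(x) * np.cos(y)

fig, ax = plt.subplots()
im = ax.imshow(z, cmap="Spectral", origin="lower", extent=(-3, 3, -3, 3))
fig.colorbar(im, ax=ax, label="sin(x)*cos(y)")
ax.set_title("Diverging data with the Spectral colormap")
plt.show()
```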
- Types of diagram:
- JSON API is the easiest API - follow the rules
- Requesting: use Python-requests (minimal sketch below)
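A minimal python-requests sketch against a placeholder JSON endpoint (the URL and parameters are invented):

```python
import requests

# Placeholder endpoint; substitute the real API you are prototyping against.
url = "https://api.example.com/v1/items"

resp = requests.get(url, params={"page": 1}, timeout=10)
resp.raise_for_status()   # fail loudly on 4xx/5xx
data = resp.json()        # JSON APIs parse straight into Python objects
print(data)
```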
- Pandas CheatSheet
- sqlAlchemy usage
- PostgreSQL (datetime functs)
- Consider SQL (but don't use select *)
- Be aware of feature casualties of large databases
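A sketch of the point above: name the columns instead of using select *, here via SQLAlchemy and pandas (the connection string, table, and column names are placeholders):

```python
import pandas as pd
from sqlalchemy import create_engine, text

# Placeholder connection string and schema; adapt to your database.
engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/mydb")

# Explicit column list: the query keeps working (and stays cheap) even if the
# table later grows extra columns you do not care about.
query = text("SELECT order_id, created_at, total FROM orders WHERE created_at >= :since")

with engine.connect() as conn:
    df = pd.read_sql(query, conn, params={"since": "2024-01-01"})

print(df.head())
```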
- Data Science - Hierarchy of Needs
- Metabase
- Standards: nist / isf
- Audit security
- Understand SSH tunnels
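A sketch of driving a local SSH tunnel from Python via the system ssh client; the host, user, and ports are placeholders:

```python
import subprocess

# Forward local port 8888 to port 8888 on the remote host, without running a
# remote command (-N). Equivalent to typing the ssh command in a shell.
tunnel = subprocess.Popen(
    ["ssh", "-N", "-L", "8888:localhost:8888", "user@remote.example.com"]
)

try:
    # ... talk to localhost:8888 here; traffic goes through the tunnel ...
    pass
finally:
    tunnel.terminate()   # close the tunnel when done
```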
- Anticipatory Failure Determination
- Secret Management for APIs
- Code-tags (PEP-0350) / Stop using TODO
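Codetags in the PEP 350 style are structured comments rather than bare TODOs; roughly like this (the mnemonics are from the PEP, while the initials, dates, and ticket number are invented):

```python
# FIXME: crashes when the input file is empty <abc 2024-05-01>
# TODO: cache the parsed config instead of re-reading it <abc>
# HACK: sleep() papers over a race in the uploader; see ticket #123 <xyz>
def load_config(path: str) -> dict:
    ...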
- vscode tip: turn off openDiffOnClick
- git bash solarized 🌞
- Music for programming
- OpenStack: Linting and Pre-commit Hooks
- open source cousin to GPL3
- Machine Learning Mastery is a great resource!
- Probabilistic Machine Learning: Advanced Topics
- Algorithms for Modern Hardware
- the-importance-of-humility-in-software-development
- "All gui is bloat and we should just go back to using the abacus" /NashFPV (YouTube comment)
- doing it right vs doing it on time
- Closed-source software: if you cannot check what it does or how it works, do not use it, for the sake of security.
- Software Design X Dieter Rams
- approaching hard problems
- First Principles
- Career Complacency
- SPACE (satisfaction, performance, activity, communication, efficiency)
- Google's Four DevOps KPIs
- Details about pair programming
- Is extreme programming a thing?
- Collaboration (tools: csvbox, pyodide)
- Resolve Joel's 12 Questions
- keeping the repo nice
- keep repo nice with makefile
- Use NestedText: No YAML, TOML, or INI. CSV/TSV is acceptable.
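A minimal sketch of round-tripping NestedText from Python, assuming the nestedtext package from PyPI; the document content here is invented:

```python
import nestedtext as nt   # assumption: the 'nestedtext' PyPI package

doc = """
name: example service
hosts:
    - alpha.example.com
    - beta.example.com
notes:
    > Everything is a string in NestedText;
    > the application does its own type conversion.
"""

data = nt.loads(doc)      # parse NestedText into plain dicts/lists/strings
print(data["hosts"])

print(nt.dumps(data))     # round-trip back to NestedText
```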
- Things you should do now
- Follow naming conventions
- Follow software engineering laws
- Music production, streaming and whatnot.
- Kinto Multi-OS shortcuts
- On-Prem Speech-Transcribing
- What Distinguishes Engineers
- Architect's Playbook
- Apple ML Intro
Rob Pike's 5 Rules of Programming
- Rule 1. You can't tell where a program is going to spend its time. Bottlenecks occur in surprising places, so don't try to second guess and put in a speed hack until you've proven that's where the bottleneck is.
- Rule 2. Measure. Don't tune for speed until you've measured, and even then don't unless one part of the code overwhelms the rest.
- Rule 3. Fancy algorithms are slow when n is small, and n is usually small. Fancy algorithms have big constants. Until you know that n is frequently going to be big, don't get fancy. (Even if n does get big, use Rule 2 first.)
- Rule 4. Fancy algorithms are buggier than simple ones, and they're much harder to implement. Use simple algorithms as well as simple data structures.
- Rule 5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.
Pike's rules 1 and 2 restate Tony Hoare's famous maxim "Premature optimization is the root of all evil." Ken Thompson rephrased Pike's rules 3 and 4 as "When in doubt, use brute force." Rules 3 and 4 are instances of the design philosophy KISS. Rule 5 was previously stated by Fred Brooks in The Mythical Man-Month. Rule 5 is often shortened to "write stupid code that uses smart objects".
See also Akin's Laws of Spacecraft Design.
- How to ssh properly
- Pipenv on Ubuntu 18.04
- Conventions for Command Line Options
- Awesome README profiles
- NoHello (Team Communication)
- IdownVotedBecause
Debugger stepping commands:
c -> continue execution until the next breakpoint
u -> until: run until a line past the current one (or the given lineno) is reached, e.g. to jump out of a loop
s -> step into the next line, descending into function calls
n -> next: run to the next line in the current function, stepping over calls
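If these are being driven from Python's pdb (an assumption; gdb uses the same letters, and pdb spells until as unt/until), the quickest way to land at the prompt is the built-in breakpoint() hook; a minimal sketch:

```python
def total(prices):
    subtotal = sum(prices)
    breakpoint()            # drops into pdb here; then c / u / s / n as above
    return subtotal * 1.2   # e.g. inspect subtotal before the markup is applied


if __name__ == "__main__":
    print(total([9.99, 4.50]))
```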
docker run -p 8888:8888 --name atp_jupy -e GRANT_SUDO=yes -e JUPYTER_ENABLE_LAB=yes -e JUPYTER_TOKEN="password" -v C:\Users\XXXX\Documents\:/home/jovyan/work --user root jupyter/datascience-notebook
jupyter notebook --generate-config
# uncomment the following lines in ~/.jupyter/jupyter_notebook_config.py
c.NotebookApp.token = ''
c.NotebookApp.password = ''
# or just run
python -m jupyter lab --no-browser --port=8889 --LabApp.token=password
Console + Logging [src]
Overwrite or append a program's stdout to a logfile while still displaying stdout and stderr.
To write the output of a command to a file, there are basically 10 commonly used ways.
Note that n.e. in the Syntax column means "not existing": there is a way, but it is too complicated to fit into the column; see the (*) footnote in the list below.
          || visible in terminal ||   visible in file   || existing
  Syntax  ||  StdOut  |  StdErr  ||  StdOut  |  StdErr  ||   file
==========++==========+==========++==========+==========++===========
    >     ||    no    |   yes    ||   yes    |    no    || overwrite
    >>    ||    no    |   yes    ||   yes    |    no    ||  append
          ||          |          ||          |          ||
    2>    ||   yes    |    no    ||    no    |   yes    || overwrite
   2>>    ||   yes    |    no    ||    no    |   yes    ||  append
          ||          |          ||          |          ||
    &>    ||    no    |    no    ||   yes    |   yes    || overwrite
   &>>    ||    no    |    no    ||   yes    |   yes    ||  append
          ||          |          ||          |          ||
  | tee   ||   yes    |   yes    ||   yes    |    no    || overwrite
 | tee -a ||   yes    |   yes    ||   yes    |    no    ||  append
          ||          |          ||          |          ||
 n.e. (*) ||   yes    |   yes    ||    no    |   yes    || overwrite
 n.e. (*) ||   yes    |   yes    ||    no    |   yes    ||  append
          ||          |          ||          |          ||
  |& tee  ||   yes    |   yes    ||   yes    |   yes    || overwrite
 |& tee -a||   yes    |   yes    ||   yes    |   yes    ||  append
- command > output.txt
  The standard output stream will be redirected to the file only; it will not be visible in the terminal. If the file already exists, it gets overwritten.
- command >> output.txt
  The standard output stream will be redirected to the file only; it will not be visible in the terminal. If the file already exists, the new data will get appended to the end of the file.
- command 2> output.txt
  The standard error stream will be redirected to the file only; it will not be visible in the terminal. If the file already exists, it gets overwritten.
- command 2>> output.txt
  The standard error stream will be redirected to the file only; it will not be visible in the terminal. If the file already exists, the new data will get appended to the end of the file.
- command &> output.txt
  Both the standard output and standard error streams will be redirected to the file only; nothing will be visible in the terminal. If the file already exists, it gets overwritten.
- command &>> output.txt
  Both the standard output and standard error streams will be redirected to the file only; nothing will be visible in the terminal. If the file already exists, the new data will get appended to the end of the file.
- command | tee output.txt
  The standard output stream will be copied to the file and will still be visible in the terminal. If the file already exists, it gets overwritten.
- command | tee -a output.txt
  The standard output stream will be copied to the file and will still be visible in the terminal. If the file already exists, the new data will get appended to the end of the file.
- (*) Bash has no shorthand syntax that allows piping only StdErr to a second command, which would be needed here in combination with tee again to complete the table. If you really need something like that, please look at "How to pipe stderr, and not stdout?" on Stack Overflow for some ways this can be done, e.g. by swapping streams or using process substitution.
- command |& tee output.txt
  Both the standard output and standard error streams will be copied to the file while still being visible in the terminal. If the file already exists, it gets overwritten.
- command |& tee -a output.txt
  Both the standard output and standard error streams will be copied to the file while still being visible in the terminal. If the file already exists, the new data will get appended to the end of the file.
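For a Python program, the same console-plus-file behaviour, including the overwrite vs. append choice, can be had in-process with the standard logging module; a minimal sketch (the logger name and filename are arbitrary):

```python
import logging

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)

# Console: behaves like leaving stdout/stderr attached to the terminal.
logger.addHandler(logging.StreamHandler())

# File: mode="a" appends (like >>), mode="w" overwrites (like >).
logger.addHandler(logging.FileHandler("output.log", mode="a"))

logger.info("visible in the terminal and in output.log")
```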
# homebrew
imagemagick
pyenv
tmux
tree
# what processes are using port 8888?
$ netstat -aon | grep 8888
TCP 127.0.0.1:8888 0.0.0.0:0 LISTENING 4984
# kill process 4984 (tskill is a Windows command; on Linux/macOS use kill 4984)
$ tskill 4984
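A cross-platform alternative to the netstat/tskill pair above, assuming psutil is installed; a sketch using the same example port:

```python
import psutil

PORT = 8888

# Find listening sockets bound to the port and the processes that own them.
for conn in psutil.net_connections(kind="inet"):
    if conn.pid and conn.laddr and conn.laddr.port == PORT and conn.status == psutil.CONN_LISTEN:
        proc = psutil.Process(conn.pid)
        print(f"port {PORT} is held by PID {conn.pid} ({proc.name()})")
        proc.terminate()   # the psutil equivalent of tskill <pid>
```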