
The Works

Configuration

The file .flaskenv contains the default configuration. Flask reads the file at startup and adds its key-value pairs to the runtime environment as environment variables. When the Flask app object is created in __init__.py, all environment variables that start with the prefix "FLASK_" are added to the app configuration.
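
For illustration, a minimal .flaskenv could look like this (FLASK_APP=the_works matches the default mentioned under "Flask commands"; any other keys shown here are just examples):

FLASK_APP=the_works
FLASK_DEBUG=1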

This is true for any prefixed environment variable, not just the ones from .flaskenv. It is therefore possible to set additional config parameters by hand before running the app. Just make sure to prefix the variable name with "FLASK_".
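
For example, to switch on SQLAlchemy's statement echoing for a single run (SQLALCHEMY_ECHO is only an illustrative key, not necessarily one the project uses):

export FLASK_SQLALCHEMY_ECHO=1
python -m flask run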

Configuration values from the runtime environment can be overridden by using Flask's -e command-line switch to pass a second config file to the app. This file is processed the same way as .flaskenv, which means that all its keys must be prefixed with "FLASK_". These variables take precedence over the default configuration.
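
A typical invocation could look like this (the file name extra.env is just a placeholder):

python -m flask -e extra.env run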

Finally, you can override config settings from Python when the Flask app is instantiated through the factory. To do this, simply pass a dictionary with (unprefixed) key-value pairs to the create_app() function as the named parameter config. Settings passed this way take precedence over those from the default configuration and from additional config files.
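
As a sketch of this precedence, assuming the factory relies on Flask's from_prefixed_env() to pull in the prefixed environment variables (the actual code in __init__.py may differ in the details):

from flask import Flask

def create_app(config=None):
    app = Flask(__name__)
    # 1. pick up all FLASK_-prefixed variables (.flaskenv, the shell, or an -e file)
    app.config.from_prefixed_env()
    # 2. values passed directly to the factory win over everything else
    if config is not None:
        app.config.from_mapping(config)
    # ... register extensions, blueprints, etc.
    return app

A test fixture could then call create_app(config={"TESTING": True}) to override the defaults.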

Flask commands

Execute commands with python -m flask <command>. You don't need to specify --app the_works as long as the environment variable "FLASK_APP" is set to "the_works"; the default configuration file does this.

Available commands:

  • run: serve the app (don't use this for production).
  • shell: start an interactive Python shell within the app context (useful, e.g., for importing specific table models and inspecting the data structures returned by ORM queries built with select()). A usage sketch follows below.
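
A quick shell session might look like this (the module path the_works.models and the class Artwork are only assumptions; substitute the project's actual ORM classes):

python -m flask shell
>>> from sqlalchemy import select
>>> from the_works import db
>>> from the_works.models import Artwork
>>> db.session.execute(select(Artwork)).scalars().first()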

Dependencies

Python Packages

  • flask
  • flask-sqlalchemy
  • python-dotenv
  • Pillow
  • pytest
  • flask-debugtoolbar (optional)
  • sqlacodegen (optional; only ever used from the command line during development)

CSS and Javascript

  • PicoCSS (regular version) + SwitchColorMode.js (from Yohn's fork)
  • DataTables.[js|css]

Icons

Some icons from heroicons.com.

Other useful stuff

Generate SQLAlchemy code from an existing database

Right now the_works reflects an existing database in order to infer the underlying data models for SQLAlchemy. Naturally, this only works if all the tables already exist in the database. If the_works is run with an empty database (which happens when running tests, for example), the app creates fresh tables itself. The table definitions it needs for this can be generated from an existing database with the help of sqlacodegen. Just run this command inside the project's root directory:

sqlacodegen --generator tables sqlite:///path/to/good/db.sqlite > ./the_works/tables.py
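
The generated tables.py then contains plain SQLAlchemy Core definitions, roughly along these lines (table and column names are made up for illustration):

from sqlalchemy import Column, Integer, MetaData, String, Table

metadata = MetaData()

t_artwork = Table(
    'artwork', metadata,
    Column('id', Integer, primary_key=True),
    Column('title', String, nullable=False)
)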

The tool sqlacodegen can also generate Python code that declares the data models directly, which would make reflection unnecessary. The command would be:

sqlacodegen --generator declarative sqlite:///path/to/good/db.sqlite > outputfile.py
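
Depending on the sqlacodegen version, the declarative output looks roughly like this (class and column names are again only illustrative; newer versions emit SQLAlchemy 2.0-style Mapped annotations):

from sqlalchemy import Integer, String
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

class Base(DeclarativeBase):
    pass

class Artwork(Base):
    __tablename__ = 'artwork'

    id: Mapped[int] = mapped_column(Integer, primary_key=True)
    title: Mapped[str] = mapped_column(String, nullable=False)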

Export database schema

Method 1: sqlite3 the_works.sqlite .schema > outputfile.sql

Method 2: Open DB in SQLitebrowser and use File -> Export -> Database to SQL file …

  • keep original CREATE statements
  • export schema only
  • overwrite old schema (DROP TABLE, then CREATE TABLE)

Generate requirements.txt

I use pipreqs to generate the file requirements.txt. The package scans all source files for import statements and uses them to determine the required pip packages.
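
A typical invocation from the project root could be (check pipreqs --help for the exact options of your version):

pipreqs --force .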