This repo implements the Plant Tracer web application. The top-level planttracer.com website is currently a static HTML site. This repo serves the subdomain https://app.planttracer.com/ and provides the app that runs in a browser on a mobile phone or desktop.
The application consists of two parts, both of which are contained in this repo:
- A web client written in JavaScript using a custom lightweight DOM utility (jQuery has been completely eliminated). Most of the app is located in deploy/app/static/, although these JavaScript files require some variables set on the HTML pages served out of deploy/app/templates/ to function.
- A back-end application written in Python using the Flask framework. This application can be served by an Apache web server with the gunicorn application server, or with AWS Lambda or another serverless framework.
Server-side storage is provided by Amazon S3 and Amazon DynamoDB. For local development and GitHub Actions, the Makefile will install Minio (a full-featured S3 clone) and Amazon's DynamoDB Local (a minimal clone):
- Movies, individual movie frames, and zip files containing movie frames are stored in Amazon S3.
- Course data, account data, and movie frame annotations are stored in Amazon DynamoDB.
Client-side storage:
- Authentication tokens are stored in a client cookie called `api_key`.
- The user's current course is currently stored in the database, but should be moved to a client cookie.
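As a sketch of how the server side might read that cookie from a raw Cookie header (the function name is illustrative, not from the repo):

```python
from http.cookies import SimpleCookie

def api_key_from_cookie_header(cookie_header: str):
    """Extract the api_key authentication token from an HTTP Cookie header.
    Returns None when the cookie is absent."""
    jar = SimpleCookie()
    jar.load(cookie_header)
    morsel = jar.get("api_key")
    return morsel.value if morsel else None

print(api_key_from_cookie_header("session=abc; api_key=tok123"))  # tok123
```

In Flask handlers this reduces to `request.cookies.get("api_key")`, since Flask parses the Cookie header for you.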
- Movies, movie frames, and ZIP files are downloaded from AWS S3 directly to the client using signed HTTP GET requests generated by the Python code running on the server.
- Movies and movie frames are uploaded from the JavaScript client directly to S3 using signed HTTP POST requests.
- The remaining static and dynamic content is downloaded from the server to the client using HTTP GET requests.
This design makes it easy to move from the server-based architecture to the AWS Lambda-based architecture, as Lambda limits HTTP GET and POST responses to 6MB and uploads to around 256KB. In a pure Lambda deployment, static content should probably be moved to a CDN.
The AWS SAM template uses CloudFormation to:
- Create a new VM.
- Create the necessary DynamoDB tables, all with the given prefix.
- Reuse an existing S3 bucket; the bucket is always an existing bucket, since it outlives the stack as the long-term archive of student videos. The Lambda is invoked via its HTTP API; it can move uploaded objects to their final keys as needed, write research/attribution metadata into the MP4 file (so the object remains self-describing when the database is gone), update DynamoDB, and log to the logs table.
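A hypothetical fragment showing the shape such a template can take (the resource and parameter names here are invented for illustration; see template.yaml for the real one):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Parameters:
  TablePrefix:            # e.g. the value of DYNAMODB_TABLE_PREFIX
    Type: String
Resources:
  MoviesTable:            # illustrative; one table per prefixed name
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: !Sub '${TablePrefix}-movies'
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: movie_id
          AttributeType: S
      KeySchema:
        - AttributeName: movie_id
          KeyType: HASH
```

Because the bucket is not declared as a resource, deleting the stack deletes the tables and the Lambda but leaves the archive of student videos untouched.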
This repo is designed to be checked out to a directory such as
/opt/webapp or $HOME/webapp in the service account of the user that
is running the application. The application runs out of the git
repo.
For the stand-alone server built from the template, the checkout location is /opt/webapp.
Once it is checked out, be sure to set the [environment variables](docs/EnvironmentVariables.rst).
For testing, this repo can be checked out anywhere, e.g. $HOME/gits/webapp.
Once it is checked out, you will run:

```shell
make install-ubuntu          # if you are on Ubuntu
make install-macos           # if you are on MacOS
make bin/minio               # to download and install Minio
make bin/DynamoDBLocal.jar   # to download and install DynamoDB Local
```
Once you have things installed, you can try:

```shell
make make-local-demo
make run-local-demo
```

If that works, you can try the full-blown experience with:

```shell
make run-local
```
At this point it is probably a good idea to read the entire Makefile.
This code should run out-of-the-box on most Linux and macOS systems. Prerequisites:
- Python 3.11 or above
- pip (the Makefile creates and runs out of a virtual environment)
- OpenCV (use opencv-python-headless)
To install prerequisites, run `make install-ubuntu` or `make install-macos` as appropriate.
Please see the [Makefile](Makefile) for the variables that need to be set. The critical ones are:
| Variable | Meaning |
|---|---|
| AWS_PROFILE | The profile that you are using (in $HOME/.aws/config) |
| AWS_REGION | The region you are deploying to. Set to `local` for testing locally with Minio and DynamoDB Local |
| STACK | The name of the stack that you are deploying to. Must be unique in your AWS account |
| STACK_STAGE | This is legacy, from when we actually had a staging stack. Now you stage by just deploying to a different stack name |
| DYNAMODB_TABLE_PREFIX | The prefix for your DynamoDB table names. Must be unique in your AWS account. |
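For illustration, the prefix is typically combined with a base table name something like this (the `-` separator and the helper name are assumptions, not taken from the repo):

```python
import os

def dynamodb_table_name(base: str) -> str:
    """Build a full DynamoDB table name from DYNAMODB_TABLE_PREFIX.
    The '-' separator is an assumption for illustration."""
    prefix = os.environ.get("DYNAMODB_TABLE_PREFIX", "")
    return f"{prefix}-{base}" if prefix else base

os.environ["DYNAMODB_TABLE_PREFIX"] = "demo"
print(dynamodb_table_name("movies"))  # demo-movies
```

Keeping all table names behind one prefix is what lets several stacks coexist in a single AWS account.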
Other environment variables you may wish to set:
- PLANTTRACER_S3_BUCKET - The S3 bucket to use, e.g. s3://planttracer-demo
- DYNAMODB_TABLE_PREFIX - The prefix to use for the DynamoDB tables.
- DEMO_COURSE_ID - If set, Planttracer runs in demo mode, and this is the ID of the demo course.
- PLANTTRACER_CREDENTIALS - A configuration file that contains email credentials.
Planttracer also uses these AWS environment variables, which are set to point at the local host or at AWS as appropriate:
- AWS_REGION
- AWS_SECRET_ACCESS_KEY
- AWS_ACCESS_KEY_ID
- AWS_ENDPOINT_URL_S3
- AWS_ENDPOINT_URL_DYNAMODB
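As a sketch of how code can decide between the local emulators and real AWS (the localhost ports are Minio's and DynamoDB Local's conventional defaults, 9000 and 8000; the function itself is illustrative, not from the repo):

```python
import os

def storage_endpoints(env=None):
    """Resolve S3 and DynamoDB endpoint URLs from the environment.
    With AWS_REGION=local, fall back to the conventional local ports;
    otherwise return None so boto3 uses the real AWS endpoints."""
    env = os.environ if env is None else env
    local = env.get("AWS_REGION") == "local"
    return {
        "s3": env.get("AWS_ENDPOINT_URL_S3",
                      "http://localhost:9000" if local else None),
        "dynamodb": env.get("AWS_ENDPOINT_URL_DYNAMODB",
                            "http://localhost:8000" if local else None),
    }

print(storage_endpoints({"AWS_REGION": "local"}))
```

The explicit AWS_ENDPOINT_URL_* variables always win, so the same code path serves local testing, GitHub Actions, and production.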
- Makefile - contains a bunch of targets for people who can't remember the AWS `sam` commands.
- template.yaml - the AWS SAM template that does the magic.
- samconfig.yaml - records the parameters and other settings that change with a deployment. It does not need to be put into version control.