diff --git a/project-preplanning.org b/project-preplanning.org
index 1b9e967..0c94fb2 100644
--- a/project-preplanning.org
+++ b/project-preplanning.org
@@ -20,7 +20,7 @@ While apps exist for the two questionnaires mentioned in the `pitch', there is s
   - combined app :: most simply, there are two apps, having a single app for both questionnaires is surely preferable (/stretch goal/: define questionnaires in eg JSON, the app can dynamically expand without needing updated)
   - encryption :: even though the data is stored locally, it is still worthwhile to encrypt it at rest for privacy reasons
   - export‡ :: being able to export eg a PDF or similar would be a boon for sharing with clinicians (eg this could be printed off to be added to notes); other formats are possible too
-  - patient notes† :: being able to take freetext notes, which could be associated with a questionnaire or be `freestanding', would aid memory for patients in consulations
+  - patient notes† :: being able to take free text notes, which could be associated with a questionnaire or be `freestanding', would aid memory for patients in consultations
   - graphing† :: since these questionnaires usually produce a score, this could be charted over time (handy for spotting patterns)
   - reminders† :: a periodic notification (eg weekly, bi-weekly, monthly) would help remind patients to track their symptoms
 
@@ -50,7 +50,7 @@ Where the video is quiet it is unlikely to be a highlight. Interesting bits migh
 
   1. where the *audio level peaks* (eg someone speaking under stress, multiple people speaking, loud part of a game)
   2. where there is *laughter*, something funny probably happened or was said
-  3. where there is lots of *motion* something interesting may be happening:w
+  3. where there is lots of *motion* something interesting may be happening
 
 To find these points:
 
@@ -72,7 +72,7 @@ There are a few caveats:
 
   - I don't know how well the `laughter detection' model works with the sample data (ie my own video files)
   - I don't know the first thing about training a model or tuning one (I suspect I would need several thousands of samples to train, perhaps fewer to tune?)
-  - I lack hardware for tuning (my GPU is ancient and doesn't have features of newer GPUS)
+  - I lack hardware for tuning (my GPU is ancient and doesn't have features of newer GPUs)
   - Working with video files can be quite slow in general
 
 ** How This Project Might Proceed