2 Commits

| Author     | SHA1       | Message                              | Date         |
|------------+------------+--------------------------------------+--------------|
| Rob Hallam | e7e9932f6b | [meeting 2] add user stories doc     | 4 months ago |
| Rob Hallam | 3e0037dd4d | [meeting 2] add pipeline classes doc | 4 months ago |
3 changed files with 363 additions and 0 deletions
  1. highlight-pipeline-classes.org (+322, -0)
  2. highlight-pipeline-userstories.org (+31, -0)
  3. meetings.org (+10, -0)

highlight-pipeline-classes.org (+322, -0)

@@ -0,0 +1,322 @@
* Class Overview

** Prepipeline Class

Placeholder, implied from [[*Post-Pipeline Actions]].

The user could perform any setup needed here, eg mounting of source file media.

** InputFiles

Approach: described elsewhere; collect files and, if relevant, map them to options (which feature extractor, duration, etc).

Options: per-class options (eg ~config_file_path~ for an ~InputFiles~ subclass which takes user input from a config file).

*** Class Sketch

#+begin_src plantuml :results output :file inputfiles.svg
scale 1000 height

InputFiles <|-- InputFilesArgs
InputFiles <|-- InputFilesJSON
InputFiles <|-- InputFilesYAML

abstract class InputFiles {
InputFilesOptions options

{abstract} get_files(*args, **kwargs)

}

note right of InputFiles::get_files
returns JSON
end note

class InputFilesArgs {}

class InputFilesJSON {}

class InputFilesYAML {}
#+end_src

#+RESULTS:
[[file:inputfiles.svg]]

~InputFiles~:
- ~InputFilesOptions~ (eg ~config_file_path~ for ~InputFilesJSON~ or ~InputFilesYAML~)
- ~get_files(**kwargs)~ → JSON
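
A rough Python sketch of the above (class and option names follow the diagram; only ~InputFilesJSON~ is filled in, and its config-file handling is an assumption):

#+begin_src python
import json
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional


@dataclass
class InputFilesOptions:
    # eg used by InputFilesJSON / InputFilesYAML
    config_file_path: Optional[str] = None


class InputFiles(ABC):
    def __init__(self, options: Optional[InputFilesOptions] = None):
        self.options = options or InputFilesOptions()

    @abstractmethod
    def get_files(self, *args, **kwargs) -> str:
        """Return JSON mapping input files to any per-file options."""


class InputFilesJSON(InputFiles):
    def get_files(self, *args, **kwargs) -> str:
        # Read the user-supplied config file and pass it through as JSON
        with open(self.options.config_file_path) as fh:
            return json.dumps(json.load(fh))
#+end_src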

** FeatureExtractor Classes

Basic approach is the usual:

- setup / prepare / pre-run (eg create an audio file if the tool does not like AV media)
- work / run (find the features)
- teardown / cleanup / post-run

The /work / run/ phase will need to either capture stdout or read files created by the tool to collect timestamps/intervals. [implementation detail for each FeatureExtractor]

Options (with suggested defaults):

- working directory (~/tmp/highlightgen/~)
- cleanup temporary files (~True~)
- log to stdout / file / none (~None~)
- padding for minimum feature duration (ie lengthen any short features to this duration, ~5s~)
- trimming for maximum feature duration (ie chop the end[s?] off anything longer than this, ~-1s~)
- reject/drop features shorter than (~0.1s~)
- reject/drop features longer than (~-1s~, ie disabled by default)
- reject/drop features with a lower `score' than (~-1~)

Notes on options:

- All options should be optional due to defaults
- Not all options will apply to all FeatureExtractors (eg some may not produce a `score' or equivalent)
- Options that are not relevant to a given ~FeatureExtractor~ can be specified but will be ignored (consider emitting a ~WARN~-level log message)

*** Class Sketch

#+begin_src plantuml :results output :file featureextractor.svg
scale 1000 height

abstract class FeatureExtractor {
{field} FeatureExtractorOptions options
{field} Logger logger

{abstract} setup()

{abstract} run()

{abstract} teardown()
}

'@dataclass
struct FeatureExtractorOptions {
+ working_directory : str
+ do_cleanup : boolean
+ log_level : int / enum
+ minimum_feature_padding : float
+ maximum_feature_trimming : float
+ reject_shorter_than : float
+ reject_longer_than : float
+ reject_scoring_less_than : float
}


FeatureExtractor *-- FeatureExtractorOptions : options
#+end_src

#+RESULTS:
[[file:featureextractor.svg]]
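
The options listed above could translate roughly into a Python ~@dataclass~ plus abstract base class like the following (defaults taken from the list above; using ~-1~ to mean "disabled" is an assumption):

#+begin_src python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional


@dataclass
class FeatureExtractorOptions:
    working_directory: str = "/tmp/highlightgen/"
    do_cleanup: bool = True
    log_level: Optional[int] = None          # None = no logging
    minimum_feature_padding: float = 5.0     # pad short features up to this (s)
    maximum_feature_trimming: float = -1.0   # -1 = do not trim
    reject_shorter_than: float = 0.1
    reject_longer_than: float = -1.0         # -1 = no upper limit
    reject_scoring_less_than: float = -1.0


class FeatureExtractor(ABC):
    def __init__(self, options: Optional[FeatureExtractorOptions] = None, logger=None):
        self.options = options or FeatureExtractorOptions()
        self.logger = logger  # a Logger instance (see Logging section below)

    @abstractmethod
    def setup(self):
        """Pre-run, eg extract an audio track if the tool dislikes AV media."""

    @abstractmethod
    def run(self):
        """Find features: capture stdout or read files produced by the tool."""

    @abstractmethod
    def teardown(self):
        """Post-run cleanup of temporary files."""
#+end_src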

** Consolidator

/(tl;dr: clustering? aggregation?)/

Basic approach: any time intervals produced earlier in the pipeline which overlap, or which are within some specified /delta/ of each other, should be combined into one interval.

Overlap example:

~(10-15, 13-20) → (10-20)~

Delta = 5 example:

~(10-15, 18-25) → (10-25)~

Non-overlap, non-within-delta=5 example:

~(10-15, 21-30) → (10-15, 21-30)~

This is essentially a reduction or transform on 1D data (time). It might make sense to consider the two approaches (overlap, overlap-within-delta) separately.

Since any consolidation (or whatever term we settle on) potentially makes an inaccurate or unwarranted value/content decision†, the option to skip this stage entirely (or effectively skip it, via a ~Consolidator~ class which passes its input through unchanged) should be included.

I am not sure if a ~Consolidator~ of any strategy should be permitted to output zero items / null. Similarly, I am not sure if trying to apply a ~Consolidator~ to any zero-sized / null set is well-defined.

Would using some kind of set theory definition be useful, or just a distraction?
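
A sketch of the overlap-or-within-delta reduction (representing intervals as ~(start, end)~ tuples is an assumption; the examples above are reproduced in the docstring):

#+begin_src python
from typing import List, Tuple

Interval = Tuple[float, float]  # (start, end)


def consolidate(intervals: List[Interval], delta: float = 0.0) -> List[Interval]:
    """Merge intervals that overlap or sit within `delta` of each other.

    eg consolidate([(10, 15), (13, 20)])           -> [(10, 20)]
       consolidate([(10, 15), (18, 25)], delta=5)  -> [(10, 25)]
       consolidate([(10, 15), (21, 30)], delta=5)  -> [(10, 15), (21, 30)]
    """
    if not intervals:
        return []  # here an empty input simply yields an empty output

    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1] + delta:
            # overlapping, or within delta of the previous interval: extend it
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
#+end_src

A pass-through ~Consolidator~ is then just the identity function on its input, and returning an empty list for an empty input is one possible answer to the null-set question.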

*** Class sketch

#+begin_src plantuml :results output :file consolidator.svg
scale 1000 height

abstract class Consolidator {
ConsolidatorOptions options
{method} run()
}
#+end_src

#+RESULTS:
[[file:consolidator.svg]]

~Consolidator~:
- ~ConsolidatorOptions~ plus consolidator-specific options (eg /delta/)
- ~run()~

** Other Operators

For example:

- ~Join~ (combine/group/associate time intervals -- ie produce one highlight video)

*Note*: this and the next step need some thinking as to how the output would 'look' when passed to ~VideoProducer~. I had originally envisioned intermediate stages writing temporary files, but then hoped to avoid this and only `produce' a video at the last possible moment. That is notionally possible but may introduce unwarranted complexity.

** VideoProducer

Approach: take the definitions above and reify / actualise them: translate something along the lines of "take video /foo/bar.mp4, take segments A, B, C... and join them to produce a video file", expressed as a representation / serialised class object / DSL definition.

On an implementation level, translate that into a call out to a program or API, eg ffmpeg, MLT, libav (etc).

Consideration: if video files (however temporary) can be produced earlier in the pipeline, there should perhaps be a ~VideoProducer~ that applies a 'nothing' definition -- that is, one which effectively just copies a (temporary) video to a (permanent) output video.
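
As an illustration of the 'call out to a program' route, a hedged sketch using two standard ffmpeg invocations (stream-copied cuts plus the concat demuxer); the function names and file handling are assumptions:

#+begin_src python
import subprocess
import tempfile
from typing import List


def cut_segment(src: str, start: float, end: float, dest: str) -> None:
    # Stream-copy one segment out of the source video (cuts snap to keyframes)
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-ss", str(start), "-to", str(end),
         "-c", "copy", dest],
        check=True,
    )


def join_segments(segments: List[str], dest: str) -> None:
    # Join pre-cut segments using ffmpeg's concat demuxer
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as listing:
        for seg in segments:
            listing.write(f"file '{seg}'\n")
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", listing.name,
         "-c", "copy", dest],
        check=True,
    )
#+end_src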

** Post-Pipeline Actions

Placeholder, not sure of any yet (maybe show log or info? something user-friendly but technically optional?)

** Additional Classes
*** Logging

Set up on init, eg ~FileLogger(dest="/path/to/file.log")~, or via a ~.setup()~ method?

Used by classes via eg dependency injection.

Sketch:

#+BEGIN_SRC plantuml :results output :file /tmp/testuml.png
'!theme spacelab
scale 1000 height

Logger <|-- FileLogger
'note "throws LoggingError" as LE


abstract class Logger {
{abstract} void log()
}

note right of Logger::log
throws LoggingError
end note

class FileLogger {
-_dest : String
}
#+END_SRC

#+RESULTS:
[[file:/tmp/testuml.png]]
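
A minimal Python version of this sketch (the ~LoggingError~ behaviour and constructor shape are assumptions):

#+begin_src python
from abc import ABC, abstractmethod


class LoggingError(Exception):
    pass


class Logger(ABC):
    @abstractmethod
    def log(self, message: str) -> None:
        """May raise LoggingError."""


class FileLogger(Logger):
    def __init__(self, dest: str):
        self._dest = dest

    def log(self, message: str) -> None:
        try:
            with open(self._dest, "a") as fh:
                fh.write(message + "\n")
        except OSError as exc:
            raise LoggingError(str(exc)) from exc
#+end_src

Pipeline classes would then receive a ~Logger~ through their constructor (the dependency injection mentioned above) rather than constructing their own.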


*** Interval

Convenience class for highlights, around some data like:

#+begin_src json
{
  "file": "/path/to/video",
  "start": 10,
  "end": 15,
  "duration": 5,
  "highlight_type": "laugh",
  "score": 0.8
}
#+end_src

Advantages include:

- can set ~start~ plus either ~duration~ or ~end~ (the other is derived)
- makes it clearer what is being passed around

Disadvantages:

- class proliferation?
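
A possible Python shape, assuming a ~@dataclass~ where either ~end~ or ~duration~ may be supplied and the missing one is derived (field names follow the JSON above):

#+begin_src python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Interval:
    file: str
    start: float
    end: Optional[float] = None
    duration: Optional[float] = None
    highlight_type: Optional[str] = None
    score: Optional[float] = None

    def __post_init__(self):
        # Accept either end or duration and derive the missing one
        if self.end is None and self.duration is not None:
            self.end = self.start + self.duration
        elif self.duration is None and self.end is not None:
            self.duration = self.end - self.start
#+end_src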

** Additional Considerations

Would it be desirable to add custom/user pre/post steps for each part of the pipeline?

- Pros: lots of flexibility
- Cons: complexity for questionable practical benefit (WLtH)

** Overview / Recap

#+begin_src plantuml :results output :file pipeline-overview.svg
scale 1000 height
title Video Highlight Generation Pipeline
allowmixing

abstract class Logger {
{abstract} log()
}

actor User

User -> PrePipelineAction
PrePipelineAction -> InputFiles
InputFiles -> FeatureExtractor
FeatureExtractor -> Consolidator
Consolidator -> Operators
Consolidator -> VideoProducer
Operators -> VideoProducer
VideoProducer -> PostPipelineAction
VideoProducer -> User : <i>Output video(s)</i>
PostPipelineAction --> User : <i>Output video(s)</i>

abstract class PrePipelineAction {}

abstract class InputFiles {}

abstract class FeatureExtractor {}
#+end_src

#+RESULTS:
[[file:pipeline-overview.svg]]

#+begin_src plantuml :results output :file highlight-pipeline2.svg
scale 1000 height
!theme cerulean

actor User
action PrePipelineAction
process InputFiles
file Video as V1
file Video as V2
file Video as VN
process FeatureExtractor
collections Features
process Consolidator
collections "Consolidated Features" as ConsolidatedFeatures
process Operators
process VideoProducer
file Highlight as H1
file Highlight as H2
file Highlight as H3
process PostPipelineAction


User -> PrePipelineAction
PrePipelineAction -> InputFiles
InputFiles <.. V1
InputFiles <.. V2
InputFiles <.. VN

InputFiles -> FeatureExtractor
FeatureExtractor .. Features
FeatureExtractor -> Consolidator
Consolidator .. ConsolidatedFeatures
Consolidator -> Operators
Consolidator -> VideoProducer
Operators -> VideoProducer
Operators .. ConsolidatedFeatures
VideoProducer -> PostPipelineAction
VideoProducer ..> H1
VideoProducer ..> H2
VideoProducer ..> H3
#+end_src

#+RESULTS:
[[file:highlight-pipeline2.svg]]
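
A very rough sketch of how the stages in the diagrams above might be driven from Python (all constructor and method signatures are assumptions, not a settled API):

#+begin_src python
def run_pipeline(pre, input_files, extractors, consolidator, operators,
                 producer, post):
    """Hypothetical end-to-end driver mirroring the pipeline diagram."""
    pre.run()                              # PrePipelineAction
    files = input_files.get_files()        # InputFiles -> JSON

    features = []
    for extractor in extractors:           # one or more FeatureExtractors
        extractor.setup()
        features.extend(extractor.run(files))
        extractor.teardown()

    consolidated = consolidator.run(features)
    for operator in operators:             # eg Join
        consolidated = operator.run(consolidated)

    videos = producer.run(consolidated)    # VideoProducer -> output video(s)
    post.run()                             # PostPipelineAction
    return videos
#+end_src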

highlight-pipeline-userstories.org (+31, -0)

@@ -0,0 +1,31 @@
* User Stories & Epics

** Magnum Epic

As a *person who produces gaming videos*,
I want to *automatically (or with minimal input) generate highlights from these videos*,
so that *I don't have to spend hours doing that myself*

TODO: rephrase last part.

** Adjust Feature-Selection Duration

Example context: an extracted feature (eg laughter) yields too short a clip, so the introduction to the joke is missed (or the clip runs too long).

As *the person creating the videos*,
I want to *adjust the duration of a particular segment*,
so that *the introduction to a joke is not missed in the highlight*

** Select / Reject Clips

Example context: clips selected by feature extractors are not wanted in the final highlight clip.

** Limit Input Clips

Example context: an input video has a part at the start or the end that the user doesn't want to include.

As *the person supplying videos for highlights*,
I want to *exclude part of the source video(s)*,
so that *parts I don't want in the output are not present*

Additional thought: what about the middle? Might it make sense to have full regions (intervals!) that are `blacklisted' from being part of the output?

meetings.org (+10, -0)

@@ -140,3 +140,13 @@ Next steps for coming week:

see [[file:highlight-pipeline-planning.org][highlight-pipeline-planning.org]]

*** Pipeline Class Sketches

see [[file:highlight-pipeline-classes.org]]

*** Pipeline User Stories

see [[file:highlight-pipeline-userstories.org]]


* Footnotes
