
Draft: captouch: normalize positional data

moon2 requested to merge positional_captouch_normalize into main

just messing around w ideas rn, this is mostly here to indicate that it's in our pipeline :D

basic thoughts:

  • the existing API has pretty bad value ranges, and apps already depend on them; the only clean way out is a new API with entirely different ranges
  • positional captouch data is generally rough around the edges, and different use cases need different post-processing. example: to input a value with a slider at a precision of 1% of full range, you need heavy smoothing, and you also need to filter out the last value generated before release, because lift-off introduces a random-ish shift in the data (see the processing in !661). that setup is entirely unsuitable when noise is acceptable but you want an immediate response (for example an otamatone), so hardcoding this one filter is not a solution. if we want positional captouch to be more easily usable for application programmers, it makes sense to provide an API that does this in a configurable manner, so they don't have to discover and mitigate these side effects on their own.
  • this could be done as an additional wrapper in micropython, but we believe it's better to bake it deep into the C backend. this lets us use all available data optimally: low think rates don't result in dropped samples for averaging, we could at some point switch to a smarter method than "drop the last sample" for lift-off data quality gating without exposing a questionable intermediate user API, etc.
  • by simply keeping historical data in the captouch data structure, we can do all of this on a singleton.
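to make the post-processing idea above concrete, here's a rough Python sketch of a configurable filter: moving-average smoothing over the last few samples, plus optionally dropping the sample just before lift-off. all names and defaults here are made up for illustration; the real thing would live in the C backend.

```python
from collections import deque

class PositionFilter:
    """Hypothetical sketch of configurable positional post-processing:
    moving-average smoothing plus ignoring the last sample before
    lift-off. Names and defaults are illustrative only."""

    def __init__(self, smooth=5, release_filter=True):
        self.smooth = smooth
        self.release_filter = release_filter
        self._history = deque(maxlen=max(smooth, 2))

    def feed(self, sample):
        # sample is None when the petal is not touched
        if sample is None:
            self._history.clear()
        else:
            self._history.append(sample)

    def get(self, pressed=True):
        if not self._history:
            return None
        samples = list(self._history)
        # lift-off introduces a random-ish shift in the last sample,
        # so optionally ignore it once the touch is released
        if self.release_filter and not pressed and len(samples) > 1:
            samples = samples[:-1]
        window = samples[-self.smooth:] if self.smooth else samples[-1:]
        return sum(window) / len(window)
```

an "otamatone" use case would simply construct this with `smooth=0, release_filter=False` to get the raw, immediate value instead.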

we think the new positional API should ideally not be an attribute but a method with optional parameters. our target is something roughly along the lines of:

# slider setup
nice_data = petal[i].get_position(smooth=5, release_filter=True, press_filter=False)  # all optional

we'd normalize all data output to [-1..1].

