Stabilization is best done as a two-step process: the first pass runs the stabilize filter, which analyzes the video and creates a transformation file; the second pass runs the transform filter, which applies those transformations.
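A minimal sketch of the two passes looks like this (the file names are placeholders; the export options simply mirror the full example further down):

#!/bin/bash
# pass 1: analysis only; the stabilize filter writes its transform file
# next to the input (default result file is inputfile.trf, presumably clip.mov.trf here)
transcode -J stabilize -i clip.mov -y null,null -o dummy

# pass 2: read the transform file and render the stabilized clip
transcode -J transform -i clip.mov -F huffyuv -y ffmpeg,tcaud -o clip-stab.mov

Both filters run with their defaults here; the option lists below show what can be tuned.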
filter_stabilize.so v0.75 (2010-04-07)

Extracts the relative transformations of subsequent frames (used for stabilization together with the transform filter in a second pass). Generates a file with relative transform information (translation, rotation) about subsequent frames. See transform below.

Options:
  'result'       path to the file used to write the transforms (default: inputfile.trf)
  'shakiness'    how shaky is the video and how quick is the camera? 1: little (fast) ... 10: very strong/quick (slow) (default: 4)
  'accuracy'     accuracy of the detection process (>= shakiness); 1: low (fast) ... 15: high (slow) (default: 4)
  'stepsize'     step size of the search process; the region around the minimum is scanned with 1-pixel resolution (smaller steps are better, at the cost of processing time) (default: 6)
  'algo'         0: brute force (translation only, no rotation); 1: small measurement fields (default; better, and what almost everyone will use)
  'mincontrast'  tracking sections with contrast below this value are discarded (0-1) (default: 0.3)
  'show'         0: draw nothing (default); 1, 2: show fields and transforms in the resulting frames (consider the 'preview' filter)
  'help'         print this help message
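For illustration, a first pass that trades speed for detection quality and writes the transforms to an explicitly named file could look like this (file names are made up for the example; only shakiness, accuracy and result are set, everything else stays at its default):

transcode -J stabilize=shakiness=8:accuracy=12:result=walk.trf \
    -i shaky-walk.mov -y null,null -o dummy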
A grid of tracking-section rectangles is laid over the frame; their number and size are calculated from the shakiness setting and the video dimensions. Contrast and some other magic (meant to spread the chosen sections nicely over the frame) are then used to select a subset of them, governed by accuracy.
Statistical techniques are used to generate the translations: the upper and lower percentiles are thrown out and the remaining values are averaged, so small moving areas are ignored.
filter_transform.so v0.75 (2009-10-31)

Transforms each frame according to the transformations given in an input file (e.g. translation, rotation); see also the stabilize filter above. Reads a file with transform information (produced by stabilize) for each frame and applies it.

Options:
  'input'      path to the file used to read the transforms (default: inputfile.trf)
  'smoothing'  number of frames (*2 + 1) used for lowpass filtering of the frame velocity; more makes the frame motion change more slowly (default: 10)
  'maxshift'   maximal number of pixels to translate the image (default: -1, no limit)
  'maxangle'   maximal angle in radians to rotate the image (default: -1, no limit)
  'crop'       0: keep border (default); 1: black background
  'invert'     1: invert transforms (default: 0)
  'relative'   consider transforms as 0: absolute, 1: relative (default)
  'zoom'       percentage to zoom; >0: zoom in, <0: zoom out (default: 0)
  'optzoom'    0: nothing; 1: determine optimal zoom (default), i.e. no (or only a little) border should be visible; note that the value given at 'zoom' is added to the one calculated here
  'interpol'   type of interpolation: 0: none, 1: linear (horizontal) (default), 2: bi-linear, 3: quadratic
  'sharpen'    amount of sharpening, 0: no sharpening (default: 0.8); uses the unsharp filter with a 5x5 matrix; sharpening may help clean up interpolation artifacts
  'help'       print this help message
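A matching second pass could point 'input' at that file and pick a few of the options above, e.g. moderate smoothing, a black background at the borders (crop=1) and bi-linear interpolation; the values are illustrative, not tuned:

transcode -J transform=input=walk.trf:smoothing=30:crop=1:interpol=2 \
    -i shaky-walk.mov -o walk-stab.mov -Q 5,5 -F huffyuv -y ffmpeg,tcaud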
Here is an example set of settings for a static, locked-down shot taken hand-held, where the only camera motion is vibration:
#!/bin/bash
IN="2012-05-18_09-02-28"
EXT=".mov"
OUT="$IN-stab$EXT"

# first pass
transcode -J stabilize=shakiness=3:accuracy=15:stepsize=1:mincontrast=0.0 \
    -i "$IN$EXT" -y null,null -o dummy

# second pass
transcode \
    -J transform=sharpen=0:smoothing=1000 \
    -i "$IN$EXT" \
    -o "$OUT" \
    -Q 5,5 \
    -F huffyuv \
    -y ffmpeg,tcaud
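The smoothing=1000 above effectively tells the transform filter to treat the camera as static, which suits a locked-down shot. For footage with intentional panning, a much lower smoothing value (the default is 10) should preserve the camera move while still removing the jitter; an illustrative, untested variant of the second pass:

transcode \
    -J transform=sharpen=0:smoothing=10 \
    -i "$IN$EXT" \
    -o "$IN-pan-stab$EXT" \
    -Q 5,5 \
    -F huffyuv \
    -y ffmpeg,tcaud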