Clips

Clip type hierarchy and factory function.

Usage:

from camtasia.timeline.clips import clip_from_dict

clip = clip_from_dict(raw_dict)
class camtasia.timeline.clips.BaseClip(data)[source]

Bases: object

Base class for all timeline clip types.

Wraps a reference to the underlying JSON dict. Mutations go directly to the dict so project.save() always writes the current state.

Parameters:

data (dict[str, Any]) – The raw clip dict from the project JSON.
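The reference-not-copy semantics can be sketched with a toy wrapper (ClipView is a hypothetical stand-in for illustration, not the library's actual class):

```python
# Minimal sketch of the wrap-by-reference pattern described above.
# ClipView is a hypothetical stand-in, not the library's BaseClip.
class ClipView:
    def __init__(self, data):
        self._data = data  # keep a reference, never a copy

    @property
    def start(self):
        return self._data["start"]

    @start.setter
    def start(self, value):
        self._data["start"] = value  # mutation lands in the original dict

raw = {"start": 0, "duration": 705600000}
clip = ClipView(raw)
clip.start = 1411200000
print(raw["start"])  # → 1411200000 — the underlying dict reflects the change
```

Because the wrapper never copies, whatever serializes the original dict later (e.g. a project save) sees every mutation made through the wrapper.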

property id: int

Unique clip ID.

property clip_type: str

The _type string (e.g. 'AMFile', 'VMFile').

property is_audio: bool

Whether this clip is an audio clip.

property is_video: bool

Whether this clip is a video clip.

property is_visible: bool

Whether this clip is a visual clip (not audio-only).

property is_image: bool

Whether this clip is an image clip.

property is_group: bool

Whether this clip is a group clip.

property is_callout: bool

Whether this clip is a callout clip.

property is_stitched: bool

Whether this clip is a stitched media clip.

property is_placeholder: bool

Whether this clip is a placeholder clip.

property start: int

Timeline position in ticks.

property duration: int

Playback duration in ticks.

property end_seconds: float

End time in seconds (start_seconds + duration_seconds).

property time_range: tuple[float, float]

(start_seconds, end_seconds) tuple.

property time_range_formatted: str

Time range as a ‘MM:SS - MM:SS’ string.
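A plausible implementation of that ‘MM:SS - MM:SS’ formatting, as a sketch (format_mmss and format_time_range are hypothetical helpers, not library functions):

```python
def format_mmss(seconds: float) -> str:
    # Hypothetical helper: render whole seconds as 'MM:SS'.
    minutes, secs = divmod(int(seconds), 60)
    return f"{minutes:02d}:{secs:02d}"

def format_time_range(start_seconds: float, end_seconds: float) -> str:
    # Matches the 'MM:SS - MM:SS' shape documented above.
    return f"{format_mmss(start_seconds)} - {format_mmss(end_seconds)}"

print(format_time_range(65.0, 125.5))  # → 01:05 - 02:05
```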

property gain: float

Audio gain (0.0 = muted, 1.0 = full volume).

is_at(time_seconds)[source]

Whether this clip spans the given time point.

Return type:

bool

is_between(range_start_seconds, range_end_seconds)[source]

Whether this clip falls entirely within the given time range.

Return type:

bool

intersects(range_start_seconds, range_end_seconds)[source]

Whether this clip overlaps with the given time range at all.

Return type:

bool
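The three time predicates reduce to simple interval arithmetic. A sketch, under the assumption that a clip occupies the half-open range [start, end):

```python
def is_at(start, end, t):
    # Clip spans the time point t.
    return start <= t < end

def is_between(start, end, lo, hi):
    # Clip falls entirely within [lo, hi].
    return lo <= start and end <= hi

def intersects(start, end, lo, hi):
    # Clip overlaps the range at all: neither interval is wholly before the other.
    return start < hi and lo < end

print(intersects(2.0, 5.0, 4.0, 9.0))  # → True (overlap on [4, 5))
```

Whether the library treats the end point as inclusive or exclusive is an assumption here; the half-open convention avoids double-counting a boundary shared by two adjacent clips.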

property is_muted: bool

Whether this clip’s audio is muted (gain == 0).

mute()[source]

Mute this clip’s audio by setting gain to 0.

Return type:

Self

Returns:

self for chaining.

property media_start: int | float | str | Fraction

Offset into source media in ticks.

May be a rational fraction string for speed-changed clips.

property media_duration: int | float | str | Fraction

Source media window in ticks.

property scalar: Fraction

Speed scalar as a Fraction.

Parses from int, float, or string like '51/101'.
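The parsing described here maps directly onto the standard library's fractions.Fraction, which accepts ints, exact floats, and 'p/q' strings. A sketch (parse_scalar is a hypothetical helper mirroring the property's documented behaviour):

```python
from fractions import Fraction

def parse_scalar(value) -> Fraction:
    # ints, floats, and strings like '51/101' all become exact Fractions.
    if isinstance(value, float):
        # limit_denominator avoids huge denominators from binary-float noise
        return Fraction(value).limit_denominator(1_000_000)
    return Fraction(value)

print(parse_scalar("51/101"))  # → 51/101
print(parse_scalar(2))         # → 2
print(parse_scalar(0.5))       # → 1/2
```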

set_speed(speed)[source]

Set playback speed multiplier.

Parameters:

speed (float) – Speed multiplier (1.0 = normal, 2.0 = double speed, 0.5 = half speed).

Return type:

Self

property speed: float

Current playback speed multiplier.

property has_effects: bool

Whether this clip has any effects applied.

property effect_count: int

Number of effects on this clip.

property keyframe_count: int

Total number of keyframes across all parameters.

property is_at_origin: bool

Whether this clip starts at time 0.

property effect_names: list[str]

Names of all effects on this clip.

property effects: list[dict[str, Any]]

Raw effect dicts (will be wrapped by the effects module later).

remove_effect_by_name(effect_name)[source]

Remove all effects with the given name. Returns count removed.

Return type:

int

is_effect_applied(effect_name)[source]

Check if a specific effect is applied to this clip.

Parameters:

effect_name (str | EffectName) – The effect name string or EffectName enum member.

Return type:

bool

Returns:

True if at least one effect with the given name exists on this clip.

property parameters: dict[str, Any]

Clip parameters dict.

property opacity: float

Clip opacity (0.0–1.0).

property volume: float

Audio volume (>= 0.0).

property is_silent: bool

Whether this clip has zero volume (gain == 0 or volume == 0).

property metadata: dict[str, Any]

Clip metadata dict.

set_metadata(metadata_key, metadata_value)[source]

Set a metadata value on this clip.

Return type:

Self

get_metadata(metadata_key, default=None)[source]

Get a metadata value from this clip.

Return type:

Any

clear_metadata()[source]

Remove all metadata from this clip.

Return type:

Self

Returns:

self for chaining.

property animation_tracks: dict[str, Any]

Animation tracks dict.

property visual_animations: list[dict[str, Any]]

Visual animation array from animationTracks.

property source_id: int | None

Source bin ID (src field), or None if absent.

set_source(source_id)[source]

Change the media source reference for this clip.

Return type:

Self

property source_effect: dict[str, Any] | None

Source effect applied to this clip, or None.

set_source_effect(*, color0=None, color1=None, color2=None, color3=None, mid_point=0.5, speed=5.0, source_file_type='tscshadervid')[source]

Create or replace the clip’s sourceEffect for shader backgrounds.

Colors are 0-255 RGB tuples. They’re converted to 0.0-1.0 internally.

Return type:

None
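The 0-255 → 0.0-1.0 colour conversion mentioned above is a straightforward per-channel division (a sketch; normalize_rgb is a hypothetical helper, not the library's internal function):

```python
def normalize_rgb(color):
    # Convert a 0-255 RGB tuple to the 0.0-1.0 floats stored in the project JSON.
    return tuple(channel / 255.0 for channel in color)

print(normalize_rgb((255, 0, 0)))  # → (1.0, 0.0, 0.0)
```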

property start_seconds: float

Timeline position in seconds.

property duration_seconds: float

Playback duration in seconds.

is_shorter_than(threshold_seconds)[source]

Whether this clip’s duration is less than the given threshold.

Return type:

bool

set_start_seconds(start_seconds)[source]

Set the clip start position in seconds.

Parameters:

start_seconds (float) – New start position in seconds.

Return type:

Self

Returns:

Self for method chaining.

set_duration_seconds(duration_seconds)[source]

Set the clip duration in seconds.

Parameters:

duration_seconds (float) – New duration in seconds.

Return type:

Self

Returns:

Self for method chaining.

set_time_range(start_seconds, duration_seconds)[source]

Set both start position and duration in seconds.

Returns self for chaining.

Return type:

Self

copy_effects_from(source)[source]

Copy all effects from another clip.

Deep copies the source clip’s effects array into this clip. Existing effects on this clip are preserved (new effects appended).

Parameters:

source (BaseClip) – Clip to copy effects from.

Return type:

Self

Returns:

self for chaining.

duplicate_effects_to(target_clip)[source]

Copy all effects from this clip to another clip.

Convenience wrapper around copy_effects_from() that reads from self and writes to target_clip.

Parameters:

target_clip (BaseClip) – Clip that will receive this clip’s effects.

Return type:

Self

Returns:

self for chaining.

add_glow_timed(start_seconds, duration_seconds, radius=35.0, intensity=0.35, fade_in_seconds=0.4, fade_out_seconds=1.0)[source]

Add a time-bounded glow effect with fade-in/out.

Parameters:
  • start_seconds (float) – Effect start relative to clip, in seconds.

  • duration_seconds (float) – Effect duration in seconds.

  • radius (float) – Glow radius.

  • intensity (float) – Glow intensity.

  • fade_in_seconds (float) – Fade-in duration in seconds.

  • fade_out_seconds (float) – Fade-out duration in seconds.

Return type:

Glow

Returns:

The created Glow effect.

fade_in(duration_seconds)[source]

Add an opacity fade-in (0 → 1) over duration_seconds.

If a fade-out already exists, merges into a single unified animation.

Parameters:

duration_seconds (float) – Fade duration in seconds.

Return type:

Self

Returns:

self for chaining.

fade_out(duration_seconds)[source]

Add an opacity fade-out (1 → 0) ending at the clip’s end.

If a fade-in already exists, merges into a single unified animation.

Parameters:

duration_seconds (float) – Fade duration in seconds.

Return type:

Self

Returns:

self for chaining.

fade(fade_in_seconds=0.0, fade_out_seconds=0.0)[source]

Apply fade-in and/or fade-out, replacing existing opacity animations.

Uses the Camtasia v10 keyframe pattern: each keyframe specifies a target opacity value, and its duration defines the animation period.

Parameters:
  • fade_in_seconds (float) – Fade-in duration (0 to skip).

  • fade_out_seconds (float) – Fade-out duration (0 to skip).

Return type:

Self

Returns:

self for chaining.

set_opacity(opacity)[source]

Set a static opacity for the entire clip.

Parameters:

opacity (float) – Opacity value (0.0–1.0).

Return type:

Self

Returns:

self for chaining.

clear_animations()[source]

Remove all visual animation entries from the clip.

Return type:

Self

Returns:

self for chaining.

add_effect(effect_data)[source]

Append a raw effect dict to this clip’s effects list.

Parameters:

effect_data (dict[str, Any]) – A complete Camtasia effect dict.

Return type:

Effect

Returns:

Wrapped Effect instance.

add_drop_shadow(offset=5, blur=10, opacity=0.5, angle=5.5, color=(0, 0, 0), enabled=1)[source]

Add a drop-shadow effect.

Parameters:
  • offset (float) – Shadow offset distance.

  • blur (float) – Blur radius.

  • opacity (float) – Shadow opacity (0.0–1.0).

  • angle (float) – Shadow angle in degrees.

  • color (tuple[float, float, float]) – RGB colour tuple.

  • enabled (int) – Whether the shadow is enabled (1=on, 0=off).

Return type:

Effect

Returns:

Wrapped DropShadow effect.

add_glow(radius=35.0, intensity=0.35)[source]

Add a glow/bloom effect.

Parameters:
  • radius (float) – Glow radius.

  • intensity (float) – Glow intensity.

Return type:

Effect

Returns:

Wrapped Glow effect.

add_round_corners(radius=12.0)[source]

Add a rounded-corners effect.

Parameters:

radius (float) – Corner radius.

Return type:

Effect

Returns:

Wrapped RoundCorners effect.

add_color_adjustment(*, brightness=0.0, contrast=0.0, saturation=1.0, channel=0, shadow_ramp_start=0.0, shadow_ramp_end=0.0, highlight_ramp_start=1.0, highlight_ramp_end=1.0)[source]

Add a color adjustment effect.

Parameters:
  • brightness (float) – -1.0 to 1.0 (0 = no change).

  • contrast (float) – -1.0 to 1.0 (0 = no change).

  • saturation (float) – 0.0 to 3.0 (1.0 = no change).

  • channel (int) – Color channel (0 = all).

  • shadow_ramp_start (float) – Shadow ramp start (0.0-1.0).

  • shadow_ramp_end (float) – Shadow ramp end (0.0-1.0).

  • highlight_ramp_start (float) – Highlight ramp start (0.0-1.0).

  • highlight_ramp_end (float) – Highlight ramp end (0.0-1.0).

Return type:

Self

add_border(*, width=4.0, color=(1.0, 1.0, 1.0, 1.0), corner_radius=0.0)[source]

Add a border effect.

Parameters:
  • width (float) – Border width.

  • color (tuple[float, float, float, float]) – RGBA border colour.

  • corner_radius (float) – Corner radius.

Return type:

Self

add_colorize(*, color=(0.5, 0.5, 0.5), intensity=0.5)[source]

Add a colorize/tint effect.

Parameters:
  • color (tuple[float, float, float]) – RGB tint colour.

  • intensity (float) – Tint intensity 0.0-1.0.

Return type:

Self

add_spotlight(*, brightness=0.5, concentration=0.5, opacity=0.35, color=(1.0, 1.0, 1.0, 0.35))[source]

Add a spotlight effect.

Return type:

Self

add_lut_effect(*, intensity=1.0, preset_name='')[source]

Add a color LUT (Look-Up Table) effect.

Parameters:
  • intensity (float) – Effect intensity 0.0-1.0.

  • preset_name (str) – Optional preset name for metadata.

Return type:

Self

add_media_matte(*, intensity=1.0, matte_mode=1, track_depth=10002, preset_name='Media Matte Luminasity')[source]

Add a media matte compositing effect.

Uses one track as a transparency mask for this clip.

Parameters:
  • intensity (float) – Effect intensity 0.0-1.0.

  • matte_mode (int) – Matte mode (1 = alpha, 2 = inverted alpha).

  • track_depth (int) – Track depth for matte source.

  • preset_name (str) – Preset name for metadata.

Return type:

Self

add_motion_blur(*, intensity=1.0)[source]

Add a motion blur effect.

Return type:

Self

add_emphasize(*, amount=0.5)[source]

Add an audio emphasis effect.

Parameters:

amount (float) – Emphasis amount 0.0-1.0.

Return type:

Self

add_blend_mode(*, mode=BlendMode.NORMAL, intensity=1.0)[source]

Add a blend mode compositing effect.

Parameters:
  • mode (int | BlendMode) – Blend mode (3=multiply, 16=normal, etc.).

  • intensity (float) – Effect intensity 0.0-1.0.

Return type:

Self

remove_effects()[source]

Remove all effects from this clip.

Return type:

Self

Returns:

self for chaining.

property translation: tuple[float, float]

(x, y) translation.

property scale: tuple[float, float]

(x, y) scale factors.

property rotation: float

Z-rotation in radians (stored as rotation1).

move_to(x, y)[source]

Set the clip’s canvas translation.

Return type:

Self

Returns:

self for chaining.

scale_to(factor)[source]

Set uniform scale on both axes.

Return type:

Self

Returns:

self for chaining.

scale_to_xy(x, y)[source]

Set non-uniform scale.

Return type:

Self

Returns:

self for chaining.

crop(left=0, top=0, right=0, bottom=0)[source]

Set geometry crop values (non-negative floats, pixel or fractional).

Return type:

Self

Returns:

self for chaining.

add_keyframe(parameter, time_seconds, value, duration_seconds=0.0, interp='eioe')[source]

Add a keyframe to a clip parameter.

Return type:

Self

Returns:

self for chaining.

summary()[source]

Human-readable clip summary.

Return type:

str

describe()[source]

Human-readable clip description.

Return type:

str

clone()[source]

Create a deep copy of this clip with a new ID.

Return type:

BaseClip

clear_keyframes(parameter=None)[source]

Remove keyframes from a parameter, or all parameters if parameter is None.

Return type:

Self

Returns:

self for chaining.

reset_transforms()[source]

Reset position, scale, and rotation to defaults.

Return type:

Self

remove_all_effects()[source]

Remove all effects from this clip.

Return type:

Self

set_opacity_fade(start_opacity=1.0, end_opacity=0.0, duration_seconds=None)[source]

Add an opacity fade keyframe animation.

Return type:

Self

set_position_keyframes(keyframes)[source]

Set position keyframes for animated movement.

Parameters:

keyframes (list[tuple[float, float, float]]) – List of (time_seconds, x, y) tuples.

Return type:

Self

set_scale_keyframes(keyframes)[source]

Set scale keyframes for animated scaling.

Parameters:

keyframes (list[tuple[float, float]]) – List of (time_seconds, scale) tuples.

Return type:

Self

set_rotation_keyframes(keyframes)[source]

Set rotation keyframes for animated rotation.

Parameters:

keyframes (list[tuple[float, float]]) – List of (time_seconds, rotation_degrees) tuples.

Return type:

Self

set_crop_keyframes(keyframes)[source]

Set crop keyframes for animated cropping.

Parameters:

keyframes (list[tuple[float, float, float, float, float]]) – List of (time_seconds, left, top, right, bottom) tuples. Values 0.0-1.0.

Return type:

Self

set_volume_fade(start_volume=1.0, end_volume=0.0, duration_seconds=None)[source]

Add a volume fade keyframe animation.

Return type:

Self

animate(*, fade_in=0.0, fade_out=0.0, scale_from=None, scale_to=None, move_from=None, move_to=None)[source]

Apply common animations in one call.

Parameters:
  • fade_in (float) – Fade-in duration in seconds (0 = no fade).

  • fade_out (float) – Fade-out duration in seconds (0 = no fade).

  • scale_from (float | None) – Starting scale (None = no scale animation).

  • scale_to (float | None) – Ending scale (None = no scale animation).

  • move_from (tuple[float, float] | None) – Starting (x, y) position (None = no movement).

  • move_to (tuple[float, float] | None) – Ending (x, y) position (None = no movement).

Return type:

Self

to_dict()[source]

Return a summary dict of this clip’s key properties.

Return type:

dict[str, Any]

property source_path: int | str

Source bin ID (int) or empty string if absent (from the ‘src’ field).

property media_start_seconds: float

Media start offset in seconds.

overlaps_with(other_clip)[source]

Check if this clip’s time range overlaps with another clip.

Return type:

bool

distance_to(other_clip)[source]

Gap in seconds between this clip and another (negative if overlapping).

Return type:

float
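The gap computation presumably reduces to interval arithmetic over (start_seconds, end_seconds) pairs, with negative values indicating overlap (a sketch; distance_between is a hypothetical helper):

```python
def distance_between(a, b):
    # a, b are (start_seconds, end_seconds) tuples.
    # Gap between the two intervals; negative when they overlap.
    a_start, a_end = a
    b_start, b_end = b
    return max(a_start, b_start) - min(a_end, b_end)

print(distance_between((0.0, 5.0), (7.5, 10.0)))  # → 2.5 (gap)
print(distance_between((0.0, 5.0), (3.0, 10.0)))  # → -2.0 (overlap)
```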

property has_keyframes: bool

Whether any parameter has keyframe animation.

clear_all_keyframes()[source]

Remove keyframes from ALL parameters, keeping default values.

Return type:

Self

copy_timing_from(source_clip)[source]

Copy start time and duration from another clip.

Return type:

Self

matches_type(clip_type)[source]

Check if this clip matches the given type.

Return type:

bool

matches_any_type(*clip_types)[source]

Check if this clip matches any of the given types.

Return type:

bool

snap_to_seconds(target_start_seconds)[source]

Move this clip to start at the given time in seconds.

Return type:

Self

is_longer_than(threshold_seconds)[source]

Whether this clip’s duration exceeds the given threshold.

Return type:

bool

apply_if(predicate, operation)[source]

Apply an operation only if the predicate is true for this clip.

Return type:

Self

copy_to_track(target_track)[source]

Copy this clip to another track, preserving timing and effects.

Creates a deep copy of the clip data, assigns a new ID from the target track, and appends it to the target track’s media list.

Parameters:

target_track (Track) – The track to copy this clip into.

Return type:

BaseClip

Returns:

The newly created clip on the target track.

class camtasia.timeline.clips.AMFile(data)[source]

Bases: BaseClip

Audio media file clip.

Wraps an AMFile JSON dict. Adds audio-specific properties for channel selection, gain, and loudness normalization.

Parameters:

data (dict[str, Any]) – The raw clip dict.

property channel_number: str

Channel number string (e.g. '0', '0,1').

property attributes: dict[str, Any]

Audio attributes dict (ident, gain, mixToMono, etc.).

property gain: float

Audio gain multiplier.

property loudness_normalization: bool

Whether loudness normalization is enabled.

property is_muted: bool

Whether the clip’s gain is zero.

normalize_gain(target_db=-23.0)[source]

Set loudness normalization target.

Camtasia uses LUFS for loudness normalization. Common targets: -23 LUFS (EBU R128), -16 LUFS (podcast).

Parameters:

target_db (float) – Target loudness in LUFS (default -23.0).

Return type:

Self

Returns:

self for chaining.

set_gain(gain)[source]

Set the audio gain (volume multiplier).

Parameters:

gain (float) – Volume multiplier (0.0 = silent, 1.0 = normal, 2.0 = double).

Return type:

Self

Returns:

self for chaining.

class camtasia.timeline.clips.VMFile(data)[source]

Bases: BaseClip

Video media file clip.

Minimal wrapper — video clips use mostly BaseClip properties.

Parameters:

data (dict[str, Any]) – The raw clip dict.

class camtasia.timeline.clips.IMFile(data)[source]

Bases: BaseClip

Image media file clip.

Inherits translation, scale, crop, and other transform helpers from BaseClip. Adds a read-only geometry_crop convenience property.

Parameters:

data (dict[str, Any]) – The raw clip dict.

property geometry_crop: dict[str, float]

Geometry crop values (keys 0 through 3).

class camtasia.timeline.clips.ScreenVMFile(data)[source]

Bases: BaseClip

Screen recording video clip.

Inherits translation, scale, and other transform helpers from BaseClip. Adds cursor effect properties.

Parameters:

data (dict[str, Any]) – The raw clip dict.

property cursor_scale: float

Cursor enlargement factor.

property cursor_opacity: float

Cursor opacity (0.0–1.0).

property cursor_track_level: float

Cursor track level.

property smooth_cursor_across_edit_duration: float

Smooth cursor across edit duration setting.

property cursor_motion_blur_intensity: float

CursorMotionBlur intensity.

property cursor_shadow: dict[str, float]

CursorShadow parameters.

property cursor_physics: dict[str, float]

CursorPhysics parameters (intensity, tilt).

property left_click_scaling: dict[str, float]

LeftClickScaling parameters (scale, speed).

class camtasia.timeline.clips.ScreenIMFile(data)[source]

Bases: BaseClip

Screen recording cursor overlay clip.

Contains per-frame cursor position keyframes.

Parameters:

data (dict[str, Any]) – The raw clip dict.

property cursor_image_path: str | None

Cursor image path identifier.

property cursor_location_keyframes: list[dict[str, Any]]

Cursor location keyframes.

Returns:

List of dicts with time, endTime, value, duration keys. value is [x, y, z].

class camtasia.timeline.clips.StitchedMedia(data)[source]

Bases: BaseClip

Container for multiple spliced segments from the same source.

The parent mediaStart/duration defines a window into the child timeline formed by the medias array.

Parameters:

data (dict[str, Any]) – The raw clip dict.

property nested_clips: list[BaseClip]

Child clip segments.

Returns:

List of typed clip instances created via clip_from_dict.

property attributes: dict[str, Any]

Clip attributes dict.

property segment_count: int

Number of nested clip segments.

property min_media_start: int

Minimum media start offset in frames.

clear_segments()[source]

Remove all nested segments.

Return type:

None

class camtasia.timeline.clips.PlaceholderMedia(data)[source]

Bases: BaseClip

A placeholder clip for missing or to-be-added media.

property subtitle: str

Subtitle text for the placeholder clip.

property width: float

Width of the placeholder in pixels.

property height: float

Height of the placeholder in pixels.

class camtasia.timeline.clips.Group(data)[source]

Bases: BaseClip

Compound clip containing its own internal tracks.

Parameters:

data (dict[str, Any]) – The raw clip dict.

property tracks: list[GroupTrack]

Internal tracks, each with their own clips.

property clip_count: int

Total number of clips across all internal tracks.

add_internal_track()[source]

Add a new empty internal track to this Group.

Return type:

GroupTrack

Returns:

The newly created GroupTrack.

ungroup()[source]

Extract all internal clips as a flat list.

Returns the clips with their start times offset by the Group’s own timeline position, converting them from group-relative to timeline-absolute. Internal clip data is deep-copied so the Group’s own state is never mutated.

Return type:

list[BaseClip]

Returns:

List of clips with timeline-absolute start positions.
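The start-time adjustment is simply an offset by the Group's own position. A sketch over raw dicts (ungroup_starts is a hypothetical helper, not the library's method):

```python
import copy

def ungroup_starts(group_start_ticks, internal_clips):
    # Deep-copy each internal clip dict and shift its group-relative
    # start to a timeline-absolute position.
    flattened = []
    for clip in internal_clips:
        c = copy.deepcopy(clip)
        c["start"] = group_start_ticks + c["start"]
        flattened.append(c)
    return flattened

clips = ungroup_starts(705600000, [{"start": 0}, {"start": 352800000}])
print([c["start"] for c in clips])  # → [705600000, 1058400000]
```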

property attributes: dict[str, Any]

Group attributes dict (ident, widthAttr, heightAttr).

property ident: str

Group name / identifier.

property width: float

Group width.

property height: float

Group height.

property is_screen_recording: bool

Return True if this group contains screen recording media.

property internal_media_src: int | None

Return the source ID of the internal screen recording media, or None.

find_internal_clip(clip_type)[source]

Find the first internal clip matching the given type string.

Return type:

BaseClip | None

property all_internal_clips: list[BaseClip]

All clips across all internal tracks (flat list).

property internal_clip_types: set[str]

Set of unique clip types across all internal tracks.

property has_audio: bool

Whether any internal clip is an audio clip.

property has_video: bool

Whether any internal clip is a video clip.

property internal_duration_seconds: float

Duration of the longest internal track in seconds.

find_internal_clips_by_type(clip_type)[source]

Find all internal clips of a specific type.

Parameters:

clip_type (str | ClipType) – Clip type string or ClipType enum value.

Return type:

list[BaseClip]

Returns:

List of matching clips across all internal tracks.

remove_internal_clip(clip_id)[source]

Remove a clip from any internal track by ID.

Cascade-deletes any transitions referencing the removed clip.

Parameters:

clip_id (int) – The id of the internal clip to remove.

Raises:

KeyError – If no internal clip with the given ID exists.

Return type:

None

clear_all_internal_clips()[source]

Remove all clips from all internal tracks.

Cascade-deletes all transitions on every internal track.

Return type:

int

Returns:

The total number of clips removed.

set_dimensions(width_pixels, height_pixels)[source]

Set the Group’s width and height attributes.

Parameters:
  • width_pixels (float) – New width value.

  • height_pixels (float) – New height value.

Return type:

Self

Returns:

self for fluent chaining.

rename(new_name)[source]

Rename this Group.

Parameters:

new_name (str) – The new identifier for this Group.

Return type:

Self

Returns:

self for fluent chaining.

merge_internal_tracks()[source]

Merge all internal tracks into a single track.

Moves every clip from tracks[1:] into tracks[0], then removes the extra tracks. If the group has no tracks, a new empty one is created.

Return type:

GroupTrack

Returns:

The surviving (first) GroupTrack containing all clips.

describe()[source]

Human-readable Group description.

Return type:

str

set_internal_segment_speeds(segments, *, next_id=None, canvas_width=None, canvas_height=None)[source]

Replace the internal track’s media with per-segment StitchedMedia clips.

Each segment maps a slice of the source recording to a timeline duration, allowing different playback speeds per segment.

Uses the Camtasia StitchedMedia format reverse-engineered from v2 projects: each StitchedMedia clip on the Group’s internal track has its own scalar, mediaStart, and nested ScreenVMFile + ScreenIMFile children.

Parameters:
  • segments (list[tuple[float, float, float]]) – List of (source_start_s, source_end_s, timeline_duration_s) tuples.

  • next_id (int | None) – Starting ID for generated clips. If None, auto-detects from existing internal clip IDs.

  • canvas_width (float | None) – Optional width to set on each created ScreenVMFile clip. When provided, overrides the source recording’s native width so the clip fits the project canvas (e.g. 1920 for a Retina recording).

  • canvas_height (float | None) – Optional height to set on each created ScreenVMFile clip.

Return type:

None
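Each segment's speed scalar follows from its tuple: source span divided by timeline duration (a sketch of the arithmetic only; the exact field mapping onto StitchedMedia dicts is an assumption):

```python
from fractions import Fraction

def segment_scalars(segments):
    # segments: (source_start_s, source_end_s, timeline_duration_s) tuples.
    # A segment playing 10s of source in 5s of timeline runs at 2x.
    return [
        Fraction(src_end - src_start).limit_denominator(1000)
        / Fraction(timeline_s).limit_denominator(1000)
        for src_start, src_end, timeline_s in segments
    ]

print(segment_scalars([(0.0, 10.0, 5.0), (10.0, 12.0, 4.0)]))
# → [Fraction(2, 1), Fraction(1, 2)]
```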

class camtasia.timeline.clips.GroupTrack(data)[source]

Bases: object

A track inside a Group clip.

Parameters:

data (dict[str, Any]) – The raw track dict from the Group’s tracks array.

property track_index: int

Track index within the group.

property clips: list[BaseClip]

Clips on this group track.

Returns:

List of typed clip instances created via clip_from_dict.

property parameters: dict[str, Any]

Track parameters dict.

property transitions: TransitionList

Transitions on this internal track.

add_clip(clip_type, source_id, start_ticks, duration_ticks, *, next_id=None, **extra_fields)[source]

Add a clip to this internal group track.

Parameters:
  • clip_type (str) – The _type value (e.g. 'AMFile', 'VMFile').

  • source_id (int | None) – Source bin ID, or None for callouts/groups.

  • start_ticks (int) – Timeline position in ticks (group-relative).

  • duration_ticks (int) – Playback duration in ticks.

  • next_id (int | None) – Explicit clip ID to use. Pass project.next_available_id for global uniqueness. If None, uses local max+1 (unique within this track only).

  • **extra_fields (Any) – Additional fields merged into the clip dict.

Return type:

BaseClip

Returns:

The newly created typed clip object.

class camtasia.timeline.clips.Callout(data)[source]

Bases: BaseClip

Text overlay / annotation clip.

The callout definition lives in the def key of the clip dict.

Parameters:

data (dict[str, Any]) – The raw clip dict.

property definition: dict[str, Any]

The full callout def dict.

property text: str

Callout text content.

property font: dict[str, Any]

Font definition dict.

property kind: str

Callout kind (e.g. 'remix').

property shape: str

Callout shape (e.g. 'text').

property style: str

Callout style (e.g. 'basic').

property width: float

Callout width.

property height: float

Callout height.

property horizontal_alignment: str

Horizontal text alignment (e.g. 'center').

property fill_color: tuple[float, float, float, float]

Fill color as (r, g, b, opacity).

property stroke_color: tuple[float, float, float, float]

Stroke color as (r, g, b, opacity).

property corner_radius: float

Corner radius for rounded shapes.

property tail_position: tuple[float, float]

Tail position as (x, y).

set_font(name, weight='Regular', size=64.0)[source]

Update the callout’s font properties.

Parameters:
  • name (str) – Font family name (e.g. 'Arial').

  • weight (str) – Font weight (e.g. 'Regular', 'Bold').

  • size (float) – Font size in points.

Return type:

Self

Returns:

Self for chaining.

set_colors(fill=None, stroke=None, font_color=None)[source]

Set fill, stroke, and/or font RGBA colors.

Parameters:
  • fill (tuple | None) – RGBA fill colour, or None to leave unchanged.

  • stroke (tuple | None) – RGBA stroke colour, or None to leave unchanged.

  • font_color (tuple | None) – RGBA font colour, or None to leave unchanged.

Return type:

Self

Returns:

Self for chaining.

resize(width, height)[source]

Set callout dimensions.

Parameters:
  • width (float) – New width.

  • height (float) – New height.

Return type:

Self

Returns:

Self for chaining.

position(x, y)[source]

Set the callout position.

Deprecated: use move_to() (inherited from BaseClip) instead.

Return type:

Self

set_alignment(horizontal, vertical)[source]

Set text alignment.

Parameters:
  • horizontal (str) – Horizontal alignment (e.g. 'center', 'left').

  • vertical (str) – Vertical alignment (e.g. 'center', 'top').

Return type:

Self

Returns:

Self for chaining.

set_size(width, height)[source]

Set callout dimensions and enable text resizing.

Parameters:
  • width (float) – Callout width.

  • height (float) – Callout height.

Return type:

Self

Returns:

Self for chaining.

add_behavior(preset=BehaviorPreset.REVEAL)[source]

Add a text behavior animation effect.

Parameters:

preset (str | BehaviorPreset) – Behavior preset name ('Reveal', 'Sliding').

Return type:

Self

Returns:

Self for chaining.

class camtasia.timeline.clips.CalloutBuilder(text)[source]

Bases: object

Fluent builder for creating styled Callout clips.

Usage:

builder = CalloutBuilder('Hello World')
builder.font('Montserrat', weight=700, size=48)
builder.color(fill=(0, 0, 0, 255), font=(255, 255, 255, 255))
builder.position(100, 200)
builder.size(400, 100)
# Then pass builder to track.add_callout_from_builder()

font(name='Montserrat', *, weight=400, size=36.0)[source]

Set font properties.

Return type:

CalloutBuilder

color(*, fill=None, font=None, stroke=None)[source]

Set colors as RGBA 0-255 tuples.

Return type:

CalloutBuilder

position(x, y)[source]

Set canvas position.

Return type:

CalloutBuilder

size(width, height)[source]

Set dimensions.

Return type:

CalloutBuilder

alignment(align)[source]

Set horizontal alignment (‘left’, ‘center’, ‘right’).

Return type:

CalloutBuilder

class camtasia.timeline.clips.UnifiedMedia(data)[source]

Bases: BaseClip

A clip bundling video and audio from the same source (e.g., Camtasia Rev).

Contains a video child and an audio child, both referencing the same .trec source file. The video child is either a ScreenVMFile (screen recording) or a VMFile (camera recording).

property video: BaseClip

The video child clip (ScreenVMFile or VMFile).

property audio: BaseClip

The audio child clip (AMFile).

property has_audio: bool

Whether this unified media contains an audio track.

property is_screen_recording: bool

Whether the video child is a screen recording (vs camera).

property is_camera: bool

Whether the video child is a camera recording.

property source_id: int | None

Source bin ID from the video child.

mute_audio()[source]

Set audio gain to zero.

Return type:

Self

camtasia.timeline.clips.clip_from_dict(data)[source]

Create the appropriate clip subclass from a JSON dict.

Parameters:

data (dict[str, Any]) – Raw clip dict containing at least an _type key.

Return type:

BaseClip

Returns:

A typed clip instance (AMFile, VMFile, etc.), or BaseClip if the type is unrecognised.

Base clip class wrapping the underlying JSON dict.

camtasia.timeline.clips.base.EDIT_RATE = 705600000

Ticks per second. Evenly divisible by common frame rates (30, 60 fps) and sample rates (44100, 48000 Hz).
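EDIT_RATE makes tick-to-second conversion exact for common frame and sample rates; a conversion sketch (seconds_to_ticks and ticks_to_seconds are hypothetical helpers):

```python
EDIT_RATE = 705_600_000  # ticks per second, from camtasia.timeline.clips.base

def seconds_to_ticks(seconds: float) -> int:
    return round(seconds * EDIT_RATE)

def ticks_to_seconds(ticks: int) -> float:
    return ticks / EDIT_RATE

# One frame at 30/60 fps and one sample at 44.1/48 kHz are all whole tick counts.
print(EDIT_RATE % 30, EDIT_RATE % 60, EDIT_RATE % 44100, EDIT_RATE % 48000)  # → 0 0 0 0
print(seconds_to_ticks(2.5))  # → 1764000000
```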

class camtasia.timeline.clips.base.BaseClip(data)[source]

Bases: object

Base class for all timeline clip types.

Wraps a reference to the underlying JSON dict. Mutations go directly to the dict so project.save() always writes the current state.

Parameters:

data (dict[str, Any]) – The raw clip dict from the project JSON.

property id: int

Unique clip ID.

property clip_type: str

The _type string (e.g. 'AMFile', 'VMFile').

property is_audio: bool

Whether this clip is an audio clip.

property is_video: bool

Whether this clip is a video clip.

property is_visible: bool

Whether this clip is a visual clip (not audio-only).

property is_image: bool

Whether this clip is an image clip.

property is_group: bool

Whether this clip is a group clip.

property is_callout: bool

Whether this clip is a callout clip.

property is_stitched: bool

Whether this clip is a stitched media clip.

property is_placeholder: bool

Whether this clip is a placeholder clip.

property start: int

Timeline position in ticks.

property duration: int

Playback duration in ticks.

property end_seconds: float

End time in seconds (start + duration).

property time_range: tuple[float, float]

(start_seconds, end_seconds) tuple.

property time_range_formatted: str

Time range as an ‘MM:SS - MM:SS’ string.

property gain: float

Audio gain (0.0 = muted, 1.0 = full volume).

is_at(time_seconds)[source]

Whether this clip spans the given time point.

Return type:

bool

is_between(range_start_seconds, range_end_seconds)[source]

Whether this clip falls entirely within the given time range.

Return type:

bool

intersects(range_start_seconds, range_end_seconds)[source]

Whether this clip overlaps with the given time range at all.

Return type:

bool
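The three predicates above differ only in strictness. Expressed as plain interval math over (start_seconds, end_seconds); the half-open boundary handling here is an assumption for illustration, not the library's exact code:

```python
def is_at(start: float, end: float, t: float) -> bool:
    """Clip spans the point t."""
    return start <= t < end

def is_between(start: float, end: float, lo: float, hi: float) -> bool:
    """Clip lies entirely inside [lo, hi]."""
    return lo <= start and end <= hi

def intersects(start: float, end: float, lo: float, hi: float) -> bool:
    """Clip overlaps [lo, hi] at all."""
    return start < hi and lo < end

# A clip running from 2 s to 5 s:
assert is_at(2, 5, 3.0)
assert not is_between(2, 5, 0, 4)  # its end spills past the range
assert intersects(2, 5, 4, 10)     # a partial overlap still counts
```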

property is_muted: bool

Whether this clip’s audio is muted (gain == 0).

mute()[source]

Mute this clip’s audio by setting gain to 0.

Return type:

Self

Returns:

self for chaining.
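Methods documented with a Self return type follow the return-self pattern, so mutations can be chained in one expression. A toy stand-in (not the real BaseClip) showing the shape, including the shared-dict mutation described in the class docstring:

```python
from typing import Any

class ChainDemo:
    """Toy stand-in for the return-self pattern used by mute(), set_metadata(), etc."""

    def __init__(self, data: dict[str, Any]) -> None:
        self.data = data  # mutations go straight to the dict, as in BaseClip

    def mute(self) -> "ChainDemo":
        self.data["gain"] = 0.0
        return self  # returning self is what enables chaining

    def set_metadata(self, key: str, value: Any) -> "ChainDemo":
        self.data.setdefault("metadata", {})[key] = value
        return self

raw = {"gain": 1.0}
ChainDemo(raw).mute().set_metadata("reviewed", True)
print(raw)  # the shared dict now reflects both mutations
```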

property media_start: int | float | str | Fraction

Offset into source media in ticks.

May be a rational fraction string for speed-changed clips.

property media_duration: int | float | str | Fraction

Source media window in ticks.

property scalar: Fraction

Speed scalar as a Fraction.

Parses from int, float, or string like '51/101'.
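The mixed int/float/string representation can be normalised with the stdlib Fraction type, which accepts ints and 'a/b' strings directly; floats are safer round-tripped through their decimal string so binary representation noise does not leak into the denominator. A sketch under those assumptions:

```python
from fractions import Fraction

def parse_scalar(raw: "int | float | str") -> Fraction:
    """Normalise a speed scalar stored as int, float, or 'a/b' string."""
    if isinstance(raw, float):
        # Fraction(0.5) is exact, but Fraction(0.1) would pick up binary noise;
        # going through the decimal string keeps the intended value.
        return Fraction(str(raw))
    return Fraction(raw)  # handles int and strings like '51/101'

print(parse_scalar("51/101"))  # 51/101
print(parse_scalar(2))         # 2
print(parse_scalar(0.5))       # 1/2
```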

set_speed(speed)[source]

Set playback speed multiplier.

Parameters:

speed (float) – Speed multiplier (1.0 = normal, 2.0 = double speed, 0.5 = half speed).

Return type:

Self

property speed: float

Current playback speed multiplier.

property has_effects: bool

Whether this clip has any effects applied.

property effect_count: int

Number of effects on this clip.

property keyframe_count: int

Total number of keyframes across all parameters.

property is_at_origin: bool

Whether this clip starts at time 0.

property effect_names: list[str]

Names of all effects on this clip.

property effects: list[dict[str, Any]]

Raw effect dicts (will be wrapped by the effects module later).

remove_effect_by_name(effect_name)[source]

Remove all effects with the given name. Returns count removed.

Return type:

int

is_effect_applied(effect_name)[source]

Check if a specific effect is applied to this clip.

Parameters:

effect_name (str | EffectName) – The effect name string or EffectName enum member.

Return type:

bool

Returns:

True if at least one effect with the given name exists on this clip.

property parameters: dict[str, Any]

Clip parameters dict.

property opacity: float

Clip opacity (0.0–1.0).

property volume: float

Audio volume (>= 0.0).

property is_silent: bool

Whether this clip has zero volume (gain == 0 or volume == 0).

property metadata: dict[str, Any]

Clip metadata dict.

set_metadata(metadata_key, metadata_value)[source]

Set a metadata value on this clip.

Return type:

Self

get_metadata(metadata_key, default=None)[source]

Get a metadata value from this clip.

Return type:

Any

clear_metadata()[source]

Remove all metadata from this clip.

Return type:

Self

Returns:

self for chaining.

property animation_tracks: dict[str, Any]

Animation tracks dict.

property visual_animations: list[dict[str, Any]]

Visual animation array from animationTracks.

property source_id: int | None

Source bin ID (src field), or None if absent.

set_source(source_id)[source]

Change the media source reference for this clip.

Return type:

Self

property source_effect: dict[str, Any] | None

Source effect applied to this clip, or None.

set_source_effect(*, color0=None, color1=None, color2=None, color3=None, mid_point=0.5, speed=5.0, source_file_type='tscshadervid')[source]

Create or replace the clip’s sourceEffect for shader backgrounds.

Colors are 0-255 RGB tuples. They’re converted to 0.0-1.0 internally.

Return type:

None
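The 0-255 to 0.0-1.0 conversion is a per-channel division by 255; for example:

```python
def normalize_rgb(color: tuple) -> tuple:
    """Scale an 8-bit RGB tuple into the 0.0-1.0 range stored internally."""
    return tuple(channel / 255 for channel in color)

print(normalize_rgb((255, 255, 255)))  # (1.0, 1.0, 1.0)
print(normalize_rgb((0, 51, 255)))     # (0.0, 0.2, 1.0)
```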

property start_seconds: float

Timeline position in seconds.

property duration_seconds: float

Playback duration in seconds.

is_shorter_than(threshold_seconds)[source]

Whether this clip’s duration is less than the given threshold.

Return type:

bool

set_start_seconds(start_seconds)[source]

Set the clip start position in seconds.

Parameters:

start_seconds (float) – New start position in seconds.

Return type:

Self

Returns:

Self for method chaining.

set_duration_seconds(duration_seconds)[source]

Set the clip duration in seconds.

Parameters:

duration_seconds (float) – New duration in seconds.

Return type:

Self

Returns:

Self for method chaining.

set_time_range(start_seconds, duration_seconds)[source]

Set both start position and duration in seconds.

Returns self for chaining.

Return type:

Self

copy_effects_from(source)[source]

Copy all effects from another clip.

Deep copies the source clip’s effects array into this clip. Existing effects on this clip are preserved (new effects appended).

Parameters:

source (BaseClip) – Clip to copy effects from.

Return type:

Self

Returns:

self for chaining.

duplicate_effects_to(target_clip)[source]

Copy all effects from this clip to another clip.

Convenience wrapper around copy_effects_from() that reads from self and writes to target_clip.

Parameters:

target_clip (BaseClip) – Clip that will receive this clip’s effects.

Return type:

Self

Returns:

self for chaining.

add_glow_timed(start_seconds, duration_seconds, radius=35.0, intensity=0.35, fade_in_seconds=0.4, fade_out_seconds=1.0)[source]

Add a time-bounded glow effect with fade-in/out.

Parameters:
  • start_seconds (float) – Effect start relative to clip, in seconds.

  • duration_seconds (float) – Effect duration in seconds.

  • radius (float) – Glow radius.

  • intensity (float) – Glow intensity.

  • fade_in_seconds (float) – Fade-in duration in seconds.

  • fade_out_seconds (float) – Fade-out duration in seconds.

Return type:

Glow

Returns:

The created Glow effect.

fade_in(duration_seconds)[source]

Add an opacity fade-in (0 → 1) over duration_seconds.

If a fade-out already exists, merges into a single unified animation.

Parameters:

duration_seconds (float) – Fade duration in seconds.

Return type:

Self

Returns:

self for chaining.

fade_out(duration_seconds)[source]

Add an opacity fade-out (1 → 0) ending at the clip’s end.

If a fade-in already exists, merges into a single unified animation.

Parameters:

duration_seconds (float) – Fade duration in seconds.

Return type:

Self

Returns:

self for chaining.

fade(fade_in_seconds=0.0, fade_out_seconds=0.0)[source]

Apply fade-in and/or fade-out, replacing existing opacity animations.

Uses the Camtasia v10 keyframe pattern: each keyframe specifies a target opacity value, and its duration defines the animation period.

Parameters:
  • fade_in_seconds (float) – Fade-in duration (0 to skip).

  • fade_out_seconds (float) – Fade-out duration (0 to skip).

Return type:

Self

Returns:

self for chaining.
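The keyframe pattern described above (a target value plus a ramp duration per keyframe) can be sketched as plain dicts. The field names time, value, and duration are illustrative assumptions, not the exact Camtasia schema:

```python
def build_fade_keyframes(clip_duration_s: float,
                         fade_in_s: float = 0.0,
                         fade_out_s: float = 0.0) -> list:
    """Sketch of fade keyframes: each entry names a target opacity and ramp time."""
    frames: list = []
    if fade_in_s > 0:
        # Ramp from the clip start up to full opacity.
        frames.append({"time": 0.0, "value": 1.0, "duration": fade_in_s})
    if fade_out_s > 0:
        # Ramp down to zero so the fade ends exactly at the clip's end.
        frames.append({"time": clip_duration_s - fade_out_s,
                       "value": 0.0, "duration": fade_out_s})
    return frames

print(build_fade_keyframes(10.0, fade_in_s=0.5, fade_out_s=1.0))
```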

set_opacity(opacity)[source]

Set a static opacity for the entire clip.

Parameters:

opacity (float) – Opacity value (0.0–1.0).

Return type:

Self

Returns:

self for chaining.

clear_animations()[source]

Remove all visual animation entries from the clip.

Return type:

Self

Returns:

self for chaining.

add_effect(effect_data)[source]

Append a raw effect dict to this clip’s effects list.

Parameters:

effect_data (dict[str, Any]) – A complete Camtasia effect dict.

Return type:

Effect

Returns:

Wrapped Effect instance.

add_drop_shadow(offset=5, blur=10, opacity=0.5, angle=5.5, color=(0, 0, 0), enabled=1)[source]

Add a drop-shadow effect.

Parameters:
  • offset (float) – Shadow offset distance.

  • blur (float) – Blur radius.

  • opacity (float) – Shadow opacity (0.0–1.0).

  • angle (float) – Shadow angle in degrees.

  • color (tuple[float, float, float]) – RGB colour tuple.

  • enabled (int) – Whether the shadow is enabled (1=on, 0=off).

Return type:

Effect

Returns:

Wrapped DropShadow effect.

add_glow(radius=35.0, intensity=0.35)[source]

Add a glow/bloom effect.

Parameters:
  • radius (float) – Glow radius.

  • intensity (float) – Glow intensity.

Return type:

Effect

Returns:

Wrapped Glow effect.

add_round_corners(radius=12.0)[source]

Add a rounded-corners effect.

Parameters:

radius (float) – Corner radius.

Return type:

Effect

Returns:

Wrapped RoundCorners effect.

add_color_adjustment(*, brightness=0.0, contrast=0.0, saturation=1.0, channel=0, shadow_ramp_start=0.0, shadow_ramp_end=0.0, highlight_ramp_start=1.0, highlight_ramp_end=1.0)[source]

Add a color adjustment effect.

Parameters:
  • brightness (float) – -1.0 to 1.0 (0 = no change).

  • contrast (float) – -1.0 to 1.0 (0 = no change).

  • saturation (float) – 0.0 to 3.0 (1.0 = no change).

  • channel (int) – Color channel (0 = all).

  • shadow_ramp_start (float) – Shadow ramp start (0.0-1.0).

  • shadow_ramp_end (float) – Shadow ramp end (0.0-1.0).

  • highlight_ramp_start (float) – Highlight ramp start (0.0-1.0).

  • highlight_ramp_end (float) – Highlight ramp end (0.0-1.0).

Return type:

Self

add_border(*, width=4.0, color=(1.0, 1.0, 1.0, 1.0), corner_radius=0.0)[source]

Add a border effect.

Parameters:
  • width (float) – Border width.

  • color (tuple[float, float, float, float]) – RGBA border colour (0.0–1.0 per channel).

  • corner_radius (float) – Corner radius (0.0 = square corners).

Return type:

Self

add_colorize(*, color=(0.5, 0.5, 0.5), intensity=0.5)[source]

Add a colorize/tint effect.

Parameters:
  • color (tuple[float, float, float]) – RGB tint colour (0.0–1.0 per channel).

  • intensity (float) – Tint intensity 0.0–1.0.

Return type:

Self

add_spotlight(*, brightness=0.5, concentration=0.5, opacity=0.35, color=(1.0, 1.0, 1.0, 0.35))[source]

Add a spotlight effect.

Return type:

Self

add_lut_effect(*, intensity=1.0, preset_name='')[source]

Add a color LUT (Look-Up Table) effect.

Parameters:
  • intensity (float) – Effect intensity 0.0-1.0.

  • preset_name (str) – Optional preset name for metadata.

Return type:

Self

add_media_matte(*, intensity=1.0, matte_mode=1, track_depth=10002, preset_name='Media Matte Luminasity')[source]

Add a media matte compositing effect.

Uses one track as a transparency mask for this clip.

Parameters:
  • intensity (float) – Effect intensity 0.0-1.0.

  • matte_mode (int) – Matte mode (1 = alpha, 2 = inverted alpha).

  • track_depth (int) – Track depth for matte source.

  • preset_name (str) – Preset name for metadata.

Return type:

Self

add_motion_blur(*, intensity=1.0)[source]

Add a motion blur effect.

Return type:

Self

add_emphasize(*, amount=0.5)[source]

Add an audio emphasis effect.

Parameters:

amount (float) – Emphasis amount 0.0-1.0.

Return type:

Self

add_blend_mode(*, mode=BlendMode.NORMAL, intensity=1.0)[source]

Add a blend mode compositing effect.

Parameters:
  • mode (int | BlendMode) – Blend mode (3=multiply, 16=normal, etc.).

  • intensity (float) – Effect intensity 0.0-1.0.

Return type:

Self

remove_effects()[source]

Remove all effects from this clip.

Return type:

Self

Returns:

self for chaining.

property translation: tuple[float, float]

(x, y) translation.

property scale: tuple[float, float]

(x, y) scale factors.

property rotation: float

Z-rotation in radians (stored as rotation1).

move_to(x, y)[source]

Set the clip’s canvas translation.

Return type:

Self

Returns:

self for chaining.

scale_to(factor)[source]

Set uniform scale on both axes.

Return type:

Self

Returns:

self for chaining.

scale_to_xy(x, y)[source]

Set non-uniform scale.

Return type:

Self

Returns:

self for chaining.

crop(left=0, top=0, right=0, bottom=0)[source]

Set geometry crop values (non-negative floats, pixel or fractional).

Return type:

Self

Returns:

self for chaining.

add_keyframe(parameter, time_seconds, value, duration_seconds=0.0, interp='eioe')[source]

Add a keyframe to a clip parameter.

Return type:

Self

Returns:

self for chaining.

summary()[source]

Human-readable clip summary.

Return type:

str

describe()[source]

Human-readable clip description.

Return type:

str

clone()[source]

Create a deep copy of this clip with a new ID.

Return type:

BaseClip

clear_keyframes(parameter=None)[source]

Remove keyframes from a parameter, or all parameters if parameter is None.

Return type:

Self

Returns:

self for chaining.

reset_transforms()[source]

Reset position, scale, and rotation to defaults.

Return type:

Self

remove_all_effects()[source]

Remove all effects from this clip.

Return type:

Self

set_opacity_fade(start_opacity=1.0, end_opacity=0.0, duration_seconds=None)[source]

Add an opacity fade keyframe animation.

Return type:

Self

set_position_keyframes(keyframes)[source]

Set position keyframes for animated movement.

Parameters:

keyframes (list[tuple[float, float, float]]) – List of (time_seconds, x, y) tuples.

Return type:

Self

set_scale_keyframes(keyframes)[source]

Set scale keyframes for animated scaling.

Parameters:

keyframes (list[tuple[float, float]]) – List of (time_seconds, scale) tuples.

Return type:

Self

set_rotation_keyframes(keyframes)[source]

Set rotation keyframes for animated rotation.

Parameters:

keyframes (list[tuple[float, float]]) – List of (time_seconds, rotation_degrees) tuples.

Return type:

Self

set_crop_keyframes(keyframes)[source]

Set crop keyframes for animated cropping.

Parameters:

keyframes (list[tuple[float, float, float, float, float]]) – List of (time_seconds, left, top, right, bottom) tuples. Values 0.0-1.0.

Return type:

Self

set_volume_fade(start_volume=1.0, end_volume=0.0, duration_seconds=None)[source]

Add a volume fade keyframe animation.

Return type:

Self

animate(*, fade_in=0.0, fade_out=0.0, scale_from=None, scale_to=None, move_from=None, move_to=None)[source]

Apply common animations in one call.

Parameters:
  • fade_in (float) – Fade-in duration in seconds (0 = no fade).

  • fade_out (float) – Fade-out duration in seconds (0 = no fade).

  • scale_from (float | None) – Starting scale (None = no scale animation).

  • scale_to (float | None) – Ending scale (None = no scale animation).

  • move_from (tuple[float, float] | None) – Starting (x, y) position (None = no movement).

  • move_to (tuple[float, float] | None) – Ending (x, y) position (None = no movement).

Return type:

Self

to_dict()[source]

Return a summary dict of this clip’s key properties.

Return type:

dict[str, Any]

property source_path: int | str

Source bin ID (int) or empty string if absent (from the ‘src’ field).

property media_start_seconds: float

Media start offset in seconds.

overlaps_with(other_clip)[source]

Check if this clip’s time range overlaps with another clip.

Return type:

bool

distance_to(other_clip)[source]

Gap in seconds between this clip and another (negative if overlapping).

Return type:

float
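The gap computation reduces to interval arithmetic: the distance between the nearer edges, negative when the ranges overlap. A stand-alone sketch of that behaviour:

```python
def gap_seconds(a: tuple, b: tuple) -> float:
    """Gap between two (start, end) ranges in seconds; negative if they overlap."""
    a_start, a_end = a
    b_start, b_end = b
    if a_start <= b_start:
        return b_start - a_end
    return a_start - b_end

print(gap_seconds((0.0, 5.0), (7.0, 9.0)))  # 2.0
print(gap_seconds((0.0, 5.0), (4.0, 9.0)))  # -1.0 (one second of overlap)
```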

property has_keyframes: bool

Whether any parameter has keyframe animation.

clear_all_keyframes()[source]

Remove keyframes from ALL parameters, keeping default values.

Return type:

Self

copy_timing_from(source_clip)[source]

Copy start time and duration from another clip.

Return type:

Self

matches_type(clip_type)[source]

Check if this clip matches the given type.

Return type:

bool

matches_any_type(*clip_types)[source]

Check if this clip matches any of the given types.

Return type:

bool

snap_to_seconds(target_start_seconds)[source]

Move this clip to start at the given time in seconds.

Return type:

Self

is_longer_than(threshold_seconds)[source]

Whether this clip’s duration exceeds the given threshold.

Return type:

bool

apply_if(predicate, operation)[source]

Apply an operation only if the predicate is true for this clip.

Return type:

Self

copy_to_track(target_track)[source]

Copy this clip to another track, preserving timing and effects.

Creates a deep copy of the clip data, assigns a new ID from the target track, and appends it to the target track’s media list.

Parameters:

target_track (Track) – The track to copy this clip into.

Return type:

BaseClip

Returns:

The newly created clip on the target track.

Audio media clip (AMFile).

class camtasia.timeline.clips.audio.AMFile(data)[source]

Bases: BaseClip

Audio media file clip.

Wraps an AMFile JSON dict. Adds audio-specific properties for channel selection, gain, and loudness normalization.

Parameters:

data (dict[str, Any]) – The raw clip dict.

property channel_number: str

Channel number string (e.g. '0', '0,1').

property attributes: dict[str, Any]

Audio attributes dict (ident, gain, mixToMono, etc.).

property gain: float

Audio gain multiplier.

property loudness_normalization: bool

Whether loudness normalization is enabled.

property is_muted: bool

Whether the clip’s gain is zero.

normalize_gain(target_db=-23.0)[source]

Set loudness normalization target.

Camtasia uses LUFS for loudness normalization. Common targets: -23 LUFS (EBU R128), -16 LUFS (podcast).

Parameters:

target_db (float) – Target loudness in LUFS (default -23.0).

Return type:

Self

Returns:

self for chaining.

set_gain(gain)[source]

Set the audio gain (volume multiplier).

Parameters:

gain (float) – Volume multiplier (0.0 = silent, 1.0 = normal, 2.0 = double).

Return type:

Self

Returns:

self for chaining.

Video media clip (VMFile).

class camtasia.timeline.clips.video.VMFile(data)[source]

Bases: BaseClip

Video media file clip.

Minimal wrapper — video clips use mostly BaseClip properties.

Parameters:

data (dict[str, Any]) – The raw clip dict.

Image media clip (IMFile).

class camtasia.timeline.clips.image.IMFile(data)[source]

Bases: BaseClip

Image media file clip.

Inherits translation, scale, crop, and other transform helpers from BaseClip. Adds a read-only geometry_crop convenience property.

Parameters:

data (dict[str, Any]) – The raw clip dict.

property geometry_crop: dict[str, float]

Geometry crop values (keys 0 through 3).

Screen recording clips (ScreenVMFile, ScreenIMFile).

class camtasia.timeline.clips.screen_recording.ScreenVMFile(data)[source]

Bases: BaseClip

Screen recording video clip.

Inherits translation, scale, and other transform helpers from BaseClip. Adds cursor effect properties.

Parameters:

data (dict[str, Any]) – The raw clip dict.

property cursor_scale: float

Cursor enlargement factor.

property cursor_opacity: float

Cursor opacity (0.0–1.0).

property cursor_track_level: float

Cursor track level.

property smooth_cursor_across_edit_duration: float

Smooth cursor across edit duration setting.

property cursor_motion_blur_intensity: float

CursorMotionBlur intensity.

property cursor_shadow: dict[str, float]

CursorShadow parameters.

property cursor_physics: dict[str, float]

CursorPhysics parameters (intensity, tilt).

property left_click_scaling: dict[str, float]

LeftClickScaling parameters (scale, speed).

class camtasia.timeline.clips.screen_recording.ScreenIMFile(data)[source]

Bases: BaseClip

Screen recording cursor overlay clip.

Contains per-frame cursor position keyframes.

Parameters:

data (dict[str, Any]) – The raw clip dict.

property cursor_image_path: str | None

Cursor image path identifier.

property cursor_location_keyframes: list[dict[str, Any]]

Cursor location keyframes.

Returns:

List of dicts with time, endTime, value, duration keys. value is [x, y, z].

Stitched (spliced) media clip.

class camtasia.timeline.clips.stitched.StitchedMedia(data)[source]

Bases: BaseClip

Container for multiple spliced segments from the same source.

The parent mediaStart/duration defines a window into the child timeline formed by the medias array.

Parameters:

data (dict[str, Any]) – The raw clip dict.

property nested_clips: list[BaseClip]

Child clip segments.

Returns:

List of typed clip instances created via clip_from_dict.

property attributes: dict[str, Any]

Clip attributes dict.

property segment_count: int

Number of nested clip segments.

property min_media_start: int

Minimum media start offset in frames.

clear_segments()[source]

Remove all nested segments.

Return type:

None

Group (compound) clip.

class camtasia.timeline.clips.group.GroupTrack(data)[source]

Bases: object

A track inside a Group clip.

Parameters:

data (dict[str, Any]) – The raw track dict from the Group’s tracks array.

property track_index: int

Track index within the group.

property clips: list[BaseClip]

Clips on this group track.

Returns:

List of typed clip instances created via clip_from_dict.

property parameters: dict[str, Any]

Track parameters dict.

property transitions: TransitionList

Transitions on this internal track.

add_clip(clip_type, source_id, start_ticks, duration_ticks, *, next_id=None, **extra_fields)[source]

Add a clip to this internal group track.

Parameters:
  • clip_type (str) – The _type value (e.g. 'AMFile', 'VMFile').

  • source_id (int | None) – Source bin ID, or None for callouts/groups.

  • start_ticks (int) – Timeline position in ticks (group-relative).

  • duration_ticks (int) – Playback duration in ticks.

  • next_id (int | None) – Explicit clip ID to use. Pass project.next_available_id for global uniqueness. If None, uses local max+1 (unique within this track only).

  • **extra_fields (Any) – Additional fields merged into the clip dict.

Return type:

BaseClip

Returns:

The newly created typed clip object.

class camtasia.timeline.clips.group.Group(data)[source]

Bases: BaseClip

Compound clip containing its own internal tracks.

Parameters:

data (dict[str, Any]) – The raw clip dict.

property tracks: list[GroupTrack]

Internal tracks, each with their own clips.

property clip_count: int

Total number of clips across all internal tracks.

add_internal_track()[source]

Add a new empty internal track to this Group.

Return type:

GroupTrack

Returns:

The newly created GroupTrack.

ungroup()[source]

Extract all internal clips as a flat list.

Returns the clips with their start times adjusted to be relative to the Group’s position on the timeline. Internal clip data is deep-copied so the Group’s own state is never mutated.

Return type:

list[BaseClip]

Returns:

List of clips with timeline-absolute start positions.
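The start-time adjustment amounts to adding the Group's own timeline start to each internal clip's group-relative start, on deep copies so the Group is never mutated. Sketched over plain dicts (the start key mirrors the documented clip field; the rest is illustrative):

```python
import copy

def ungroup_starts(group_start: int, internal_clips: list) -> list:
    """Return deep-copied clips with group-relative starts made timeline-absolute."""
    extracted = []
    for clip in internal_clips:
        flat = copy.deepcopy(clip)  # never mutate the group's own state
        flat["start"] = group_start + clip["start"]
        extracted.append(flat)
    return extracted

internal = [{"id": 1, "start": 0}, {"id": 2, "start": 100}]
flat = ungroup_starts(500, internal)
print([c["start"] for c in flat])  # [500, 600]
print(internal[0]["start"])        # 0 -- originals untouched
```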

property attributes: dict[str, Any]

Group attributes dict (ident, widthAttr, heightAttr).

property ident: str

Group name / identifier.

property width: float

Group width.

property height: float

Group height.

property is_screen_recording: bool

Whether this group contains screen recording media.

property internal_media_src: int | None

Source ID of the internal screen recording media, or None.

find_internal_clip(clip_type)[source]

Find the first internal clip matching the given type string.

Return type:

BaseClip | None

property all_internal_clips: list[BaseClip]

All clips across all internal tracks (flat list).

property internal_clip_types: set[str]

Set of unique clip types across all internal tracks.

property has_audio: bool

Whether any internal clip is an audio clip.

property has_video: bool

Whether any internal clip is a video clip.

property internal_duration_seconds: float

Duration of the longest internal track in seconds.

find_internal_clips_by_type(clip_type)[source]

Find all internal clips of a specific type.

Parameters:

clip_type (str | ClipType) – Clip type string or ClipType enum value.

Return type:

list[BaseClip]

Returns:

List of matching clips across all internal tracks.

remove_internal_clip(clip_id)[source]

Remove a clip from any internal track by ID.

Cascade-deletes any transitions referencing the removed clip.

Parameters:

clip_id (int) – The id of the internal clip to remove.

Raises:

KeyError – If no internal clip with the given ID exists.

Return type:

None

clear_all_internal_clips()[source]

Remove all clips from all internal tracks.

Cascade-deletes all transitions on every internal track.

Return type:

int

Returns:

The total number of clips removed.

set_dimensions(width_pixels, height_pixels)[source]

Set the Group’s width and height attributes.

Parameters:
  • width_pixels (float) – New width value.

  • height_pixels (float) – New height value.

Return type:

Self

Returns:

self for fluent chaining.

rename(new_name)[source]

Rename this Group.

Parameters:

new_name (str) – The new identifier for this Group.

Return type:

Self

Returns:

self for fluent chaining.

merge_internal_tracks()[source]

Merge all internal tracks into a single track.

Moves every clip from tracks[1:] into tracks[0], then removes the extra tracks. If the group has no tracks, a new empty one is created.

Return type:

GroupTrack

Returns:

The surviving (first) GroupTrack containing all clips.

describe()[source]

Human-readable Group description.

Return type:

str

set_internal_segment_speeds(segments, *, next_id=None, canvas_width=None, canvas_height=None)[source]

Replace the internal track’s media with per-segment StitchedMedia clips.

Each segment maps a slice of the source recording to a timeline duration, allowing different playback speeds per segment.

Uses the Camtasia StitchedMedia format reverse-engineered from v2 projects: each StitchedMedia clip on the Group’s internal track has its own scalar, mediaStart, and nested ScreenVMFile + ScreenIMFile children.

Parameters:
  • segments (list[tuple[float, float, float]]) – List of (source_start_s, source_end_s, timeline_duration_s) tuples.

  • next_id (int | None) – Starting ID for generated clips. If None, auto-detects from existing internal clip IDs.

  • canvas_width (float | None) – Optional width to set on each created ScreenVMFile clip. When provided, overrides the source recording’s native width so the clip fits the project canvas (e.g. 1920 for a Retina recording).

  • canvas_height (float | None) – Optional height to set on each created ScreenVMFile clip.

Return type:

None
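Each segment's speed scalar follows directly from its tuple: the source span divided by the timeline duration. Computing exact Fraction scalars for a segment list (a sketch of the arithmetic, not the library's implementation):

```python
from fractions import Fraction

def segment_scalars(segments: list) -> list:
    """Speed scalar per (source_start_s, source_end_s, timeline_duration_s) segment."""
    scalars = []
    for src_start, src_end, timeline_dur in segments:
        # Round-trip floats through their decimal strings for exact fractions.
        source_span = Fraction(str(src_end)) - Fraction(str(src_start))
        scalars.append(source_span / Fraction(str(timeline_dur)))
    return scalars

# 10 s of source squeezed into 5 s plays at 2x; 4 s stretched over 8 s plays at 0.5x.
print(segment_scalars([(0.0, 10.0, 5.0), (10.0, 14.0, 8.0)]))
```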

Callout (text overlay) clip.

class camtasia.timeline.clips.callout.CalloutBuilder(text)[source]

Bases: object

Fluent builder for creating styled Callout clips.

Usage:

builder = CalloutBuilder('Hello World')
builder.font('Montserrat', weight=700, size=48)
builder.color(fill=(0, 0, 0, 255), font=(255, 255, 255, 255))
builder.position(100, 200)
builder.size(400, 100)
# Then pass builder to track.add_callout_from_builder()

font(name='Montserrat', *, weight=400, size=36.0)[source]

Set font properties.

Return type:

CalloutBuilder

color(*, fill=None, font=None, stroke=None)[source]

Set colors as RGBA 0-255 tuples.

Return type:

CalloutBuilder

position(x, y)[source]

Set canvas position.

Return type:

CalloutBuilder

size(width, height)[source]

Set dimensions.

Return type:

CalloutBuilder

alignment(align)[source]

Set horizontal alignment (‘left’, ‘center’, ‘right’).

Return type:

CalloutBuilder

class camtasia.timeline.clips.callout.Callout(data)[source]

Bases: BaseClip

Text overlay / annotation clip.

The callout definition lives in the def key of the clip dict.

Parameters:

data (dict[str, Any]) – The raw clip dict.

property definition: dict[str, Any]

The full callout def dict.

property text: str

Callout text content.

property font: dict[str, Any]

Font definition dict.

property kind: str

Callout kind (e.g. 'remix').

property shape: str

Callout shape (e.g. 'text').

property style: str

Callout style (e.g. 'basic').

property width: float

Callout width.

property height: float

Callout height.

property horizontal_alignment: str

Horizontal text alignment (e.g. 'center').

property fill_color: tuple[float, float, float, float]

Fill color as (r, g, b, opacity).

property stroke_color: tuple[float, float, float, float]

Stroke color as (r, g, b, opacity).

property corner_radius: float

Corner radius for rounded shapes.

property tail_position: tuple[float, float]

Tail position as (x, y).

set_font(name, weight='Regular', size=64.0)[source]

Update the callout’s font properties.

Parameters:
  • name (str) – Font family name (e.g. 'Arial').

  • weight (str) – Font weight (e.g. 'Regular', 'Bold').

  • size (float) – Font size in points.

Return type:

Self

Returns:

Self for chaining.

set_colors(fill=None, stroke=None, font_color=None)[source]

Set fill, stroke, and/or font RGBA colors.

Parameters:
  • fill (tuple | None) – Fill RGBA colour, or None to leave unchanged.

  • stroke (tuple | None) – Stroke RGBA colour, or None to leave unchanged.

  • font_color (tuple | None) – Font RGBA colour, or None to leave unchanged.

Return type:

Self

Returns:

Self for chaining.

resize(width, height)[source]

Set callout dimensions.

Parameters:
  • width (float) – New width.

  • height (float) – New height.

Return type:

Self

Returns:

Self for chaining.

position(x, y)[source]

Set the callout position.

Deprecated: use move_to() instead (inherited from BaseClip).

Return type:

Self

set_alignment(horizontal, vertical)[source]

Set text alignment.

Parameters:
  • horizontal (str) – Horizontal alignment (e.g. 'center', 'left').

  • vertical (str) – Vertical alignment (e.g. 'center', 'top').

Return type:

Self

Returns:

Self for chaining.

set_size(width, height)[source]

Set callout dimensions and enable text resizing.

Parameters:
  • width (float) – Callout width.

  • height (float) – Callout height.

Return type:

Self

Returns:

Self for chaining.

add_behavior(preset=BehaviorPreset.REVEAL)[source]

Add a text behavior animation effect.

Parameters:

preset (str | BehaviorPreset) – Behavior preset name ('Reveal', 'Sliding').

Return type:

Self

Returns:

Self for chaining.

class camtasia.timeline.clips.placeholder.PlaceholderMedia(data)[source]

Bases: BaseClip

A placeholder clip for missing or to-be-added media.

property subtitle: str

Subtitle text for the placeholder clip.

property width: float

Width of the placeholder in pixels.

property height: float

Height of the placeholder in pixels.

UnifiedMedia clip type for bundled video and audio.

class camtasia.timeline.clips.unified.UnifiedMedia(data)[source]

Bases: BaseClip

A clip bundling video and audio from the same source (e.g., Camtasia Rev).

Contains a video child and an audio child, both referencing the same .trec source file. The video child is either a ScreenVMFile (screen recording) or a VMFile (camera recording).

property video: BaseClip

The video child clip (ScreenVMFile or VMFile).

property audio: BaseClip

The audio child clip (AMFile).

property has_audio: bool

Whether this unified media contains an audio track.

property is_screen_recording: bool

Whether the video child is a screen recording (vs camera).

property is_camera: bool

Whether the video child is a camera recording.

property source_id: int | None

Source bin ID from the video child.

mute_audio()[source]

Set audio gain to zero.

Return type:

Self