AVAssetWriter creating time code issues in NLEs
17:11 22 Sep 2025

This is specifically a question about AVAssetWriter in Swift.

I'm not a programmer, so I assume I've got something wrong, but after repeated testing and searching forums I can't find an answer to my problem.

The problem is that whenever I compress a video file that has time code using AVAssetWriter, everything looks fine in QuickTime Player, and MediaInfo reports the time code as correct, but Premiere Pro and DaVinci Resolve both show incorrect time codes. For MOV files the result is usually only 1 frame off, but for MP4 the error seems to compound over time depending on the start time code, and ends up wildly off.

My question is whether this is a known issue, and whether, despite setting the start time to 0, AVAssetWriter does something with the edit lists that confuses NLEs?
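In case it helps diagnose, one way to check what actually lands in the file's edit list is to parse the `elst` box payload directly (layout per ISO/IEC 14496-12). This is a naive sketch that only handles a version-0 box, and the helper names are mine:

```swift
import Foundation

// Read a big-endian UInt32 starting at `offset` within `data`.
func be32(_ data: Data, _ offset: Int) -> UInt32 {
    data[data.startIndex + offset ..< data.startIndex + offset + 4]
        .reduce(0) { ($0 << 8) | UInt32($1) }
}

// Parse a version-0 `elst` payload: 1 byte version, 3 bytes flags,
// a 4-byte entry count, then (segment_duration, media_time, media_rate)
// triples. media_time == -1 marks an empty edit; media_time > 0 skips
// media at the start - either can shift what an NLE treats as frame 0.
func parseElstPayload(_ payload: Data) -> [(duration: UInt32, mediaTime: Int32)]? {
    guard payload.count >= 8, payload[payload.startIndex] == 0 else { return nil } // version 0 only
    let entryCount = Int(be32(payload, 4))
    var entries: [(duration: UInt32, mediaTime: Int32)] = []
    var offset = 8
    for _ in 0..<entryCount {
        guard payload.count >= offset + 12 else { return nil }
        entries.append((duration: be32(payload, offset),
                        mediaTime: Int32(bitPattern: be32(payload, offset + 4))))
        offset += 12 // each entry ends with a 4-byte media_rate field we skip
    }
    return entries
}
```

For a track written with a zero start time you would expect a single entry with mediaTime 0; an empty edit (mediaTime -1) or a positive mediaTime in the writer's output would be consistent with an NLE starting the time code one frame off.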

The workaround I found is to read each sample buffer of the source time code and subtract the value of 1 frame from the presentation timestamp, but this seems like an odd way to go about it, and I can't believe that's what I'm supposed to be doing.
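For what it's worth, the reason a single frame matters so much is just how a frame count maps to the HH:MM:SS:FF string an NLE displays. A quick non-drop-frame sketch (the helper name is mine, and it assumes an integer frame rate):

```swift
import Foundation

// Convert an absolute frame count to the HH:MM:SS:FF timecode string an
// NLE would display, assuming a non-drop-frame integer frame rate.
func timecodeString(frames: Int, fps: Int) -> String {
    let ff = frames % fps               // residual frames past the last second
    let totalSeconds = frames / fps
    let ss = totalSeconds % 60
    let mm = (totalSeconds / 60) % 60
    let hh = totalSeconds / 3600
    return String(format: "%02d:%02d:%02d:%02d", hh, mm, ss, ff)
}
```

At 25 fps, frame 90000 displays as 01:00:00:00, while frame 89999 (one frame earlier) displays as 00:59:59:24 - exactly the kind of off-by-one I'm seeing in Premiere.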

To clarify my workflow:

I'm on a Mac Studio M2, running Sonoma 14.7, Xcode 15.4, coding with Swift.

I'm reading camera files, usually ProRes MOV or HEVC MP4, but the issue occurs with any video file that has a time code. I set up the AVAsset readers and writers as per every code sample I've seen. These are the main settings I assumed could have made a difference, but they don't seem to have any bearing on this particular problem:

let loadNaturalTimeScale = try? await videoTrack.load(.naturalTimeScale)
guard let naturalTimeScale = loadNaturalTimeScale else {
    onErrorHandler("Failed to load video properties")
    return
}

let inputTimescale = naturalTimeScale // just so I could easily try a bunch of different timescales, but nothing helped

let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoOutputSettings)
videoInput.mediaTimeScale = inputTimescale
assetWriter.add(videoInput)

let timecodeWriterInput = AVAssetWriterInput(mediaType: .timecode, outputSettings: nil)
timecodeWriterInput.mediaTimeScale = inputTimescale

let startTime: CMTime = .zero
assetWriter.startSession(atSourceTime: startTime)

// As far as setting it up goes I assume that's all the helpful information, but let me know if any other parts of the setup would help diagnose.

If the video already has a time code track, a new DispatchQueue is created to run concurrently with the audio and video queues, and it cycles through copying the sample buffers and appending them.

Then the workaround that fixes the time code issue in those NLEs: read each sample buffer from the source, and instead of appending it directly, create a new sample buffer whose presentation timestamp is the source sample's presentation timestamp minus the value of 1 frame at that video's time code time scale.

//This is the Swift code that made everything out of sync in NLEs:

    func appendTimecodeSamples(
        from track: AVAssetTrack,
        to writerInput: AVAssetWriterInput,
        startTimecode: CMTime,
        frameRate: Float,
        videoTrack: AVAssetTrack,
        file: FileItem,
        onErrorHandler: @escaping (String) -> Void,
        assetWriter: AVAssetWriter,
        completion: @escaping () -> Void)
    {
        guard let reader = try? AVAssetReader(asset: track.asset!) else {
            onErrorHandler("Failed to create AVAssetReader")
            completion(); return
        }
        let output = AVAssetReaderTrackOutput(track: track, outputSettings: nil)
        reader.add(output)
        guard reader.startReading() else {
            onErrorHandler("Failed to start reading the asset")
            completion(); return
        }
        
        let queue = DispatchQueue(label: "timecodeInputQueue")
        var frameNumber: Int64 = 0
        var lastPTS: CMTime = .invalid
        
        writerInput.requestMediaDataWhenReady(on: queue) { [weak writerInput] in
            guard let writerInput = writerInput else { return }
            
            while writerInput.isReadyForMoreMediaData {
                
                guard assetWriter.status == .writing else {
                    onErrorHandler("AssetWriter stopped with status \(assetWriter.status) – \(assetWriter.error?.localizedDescription ?? "unknown")")
                    reader.cancelReading()
                    completion(); return
                }
                
                guard let srcSample = output.copyNextSampleBuffer() else {
                    if let err = reader.error {
                        onErrorHandler("TC reader failed: \(err.localizedDescription)")
                    }
                    writerInput.markAsFinished()
                    completion(); return
                }
                
                let srcPTS = CMSampleBufferGetPresentationTimeStamp(srcSample)
                if lastPTS.isValid && srcPTS < lastPTS {
                    onErrorHandler("Discontinuity in TC track at frame \(frameNumber).")
                    reader.cancelReading()
                    completion(); return
                }
                lastPTS = srcPTS
                
                var shouldStop = false
                autoreleasepool {
                    var timing = CMSampleTimingInfo(
                        // Note: Int32(frameRate) truncates fractional rates such as 29.97
                        duration: CMTime(value: 1, timescale: Int32(frameRate)),
                        presentationTimeStamp: CMTime(value: frameNumber, timescale: Int32(frameRate)),
                        decodeTimeStamp: .invalid)
                    
                    var newSample: CMSampleBuffer?
                    let status = CMSampleBufferCreateCopyWithNewTiming(
                        allocator: kCFAllocatorDefault,
                        sampleBuffer: srcSample,
                        sampleTimingEntryCount: 1,
                        sampleTimingArray: &timing,
                        sampleBufferOut: &newSample)
                    
                    guard status == noErr, let newSample = newSample else {
                        onErrorHandler("Failed to retime frame \(frameNumber) - OSStatus \(status)")
                        shouldStop = true; return
                    }
                    
                    if !writerInput.append(newSample) {
                        onErrorHandler("Writer rejected TC frame \(frameNumber)")
                        shouldStop = true; return
                    }
                    
                    #if DEBUG
                    let pts = CMSampleBufferGetPresentationTimeStamp(newSample)
                    print("Appended TC frame \(frameNumber) - PTS \(pts)")
                    #endif
                }
                if shouldStop {
                    // A `return` inside autoreleasepool only exits the pool closure,
                    // not this loop, so cancel and bail out here instead
                    reader.cancelReading()
                    completion(); return
                }
                frameNumber += 1
            }
        }
    }
// This is the workaround, where the first sample's presentation timestamp (the value of 1 frame in my files) is subtracted from every presentation timestamp:
    func appendTimecodeSamples(
        from track: AVAssetTrack,
        to writerInput: AVAssetWriterInput,
        startTimecode: CMTime,
        frameRate: Float,
        videoTrack: AVAssetTrack,
        file: FileItem,
        onErrorHandler: @escaping (String) -> Void,
        assetWriter: AVAssetWriter,
        completion: @escaping () -> Void)
    {

        guard let reader = try? AVAssetReader(asset: track.asset!) else {
            onErrorHandler("Failed to create AVAssetReader")
            completion(); return
        }
        let output = AVAssetReaderTrackOutput(track: track, outputSettings: nil)
        reader.add(output)
        guard reader.startReading() else {
            onErrorHandler("Failed to start reading the asset")
            completion(); return
        }
        
        let queue = DispatchQueue(label: "timecodeInputQueue")
        var frameNumber: Int64 = 0
        var lastPTS: CMTime = .invalid
        var basePTS: CMTime?

        writerInput.requestMediaDataWhenReady(on: queue) { [weak writerInput] in
            guard let writerInput = writerInput else { return }
            
            while writerInput.isReadyForMoreMediaData {
                
                guard assetWriter.status == .writing else {
                    onErrorHandler("AssetWriter stopped with status \(assetWriter.status) - \(assetWriter.error?.localizedDescription ?? "unknown")")
                    reader.cancelReading()
                    completion(); return
                }

                guard let srcSample = output.copyNextSampleBuffer() else {
                    if let err = reader.error {
                        onErrorHandler("TC reader failed: \(err.localizedDescription)")
                    }
                    writerInput.markAsFinished()
                    completion(); return
                }
                
                let srcPTS = CMSampleBufferGetPresentationTimeStamp(srcSample)
                if lastPTS.isValid && srcPTS < lastPTS {
                    onErrorHandler("Discontinuity in TC track at frame \(frameNumber).")
                    reader.cancelReading()
                    completion(); return
                }
                lastPTS = srcPTS
                
                if basePTS == nil { basePTS = srcPTS }
                let newPTS = CMTimeSubtract(srcPTS, basePTS!)

                var shouldStop = false
                autoreleasepool {
                    var timing = CMSampleTimingInfo(
                        // Note: Int32(frameRate) truncates fractional rates such as 29.97
                        duration: CMTime(value: 1, timescale: Int32(frameRate)),
                        presentationTimeStamp: newPTS,
                        decodeTimeStamp: .invalid)
                    
                    var newSample: CMSampleBuffer?
                    let status = CMSampleBufferCreateCopyWithNewTiming(
                        allocator: kCFAllocatorDefault,
                        sampleBuffer: srcSample,
                        sampleTimingEntryCount: 1,
                        sampleTimingArray: &timing,
                        sampleBufferOut: &newSample)
                    
                    guard status == noErr, let newSample = newSample else {
                        onErrorHandler("Failed to retime frame \(frameNumber) - OSStatus \(status)")
                        shouldStop = true; return
                    }
                    
                    if !writerInput.append(newSample) {
                        onErrorHandler("Writer rejected TC frame \(frameNumber)")
                        shouldStop = true; return
                    }
                    
                    #if DEBUG
                    let dbgPTS = CMSampleBufferGetPresentationTimeStamp(newSample)
                    print("Appended TC frame \(frameNumber) - PTS \(dbgPTS)")
                    #endif
                }
                if shouldStop {
                    // A `return` inside autoreleasepool only exits the pool closure,
                    // not this loop, so cancel and bail out here instead
                    reader.cancelReading()
                    completion(); return
                }
                frameNumber += 1
            }
        }
    }
swift avfoundation avassetwriter adobe-premiere timecodes