Writing video + audio with AVAssetWriter and AVAssetWriterInputs is not working. Why?

I have been trying to write video + audio using AVAssetWriter and AVAssetWriterInputs.

I have seen multiple posts on this forum from people saying they were able to accomplish this, but it is not working for me. If I write only video, the code works fine. When I add audio, the output file is corrupted and cannot be played back.

Here is part of my code:

Setting up the AVCaptureVideoDataOutput and AVCaptureAudioDataOutput:

    NSError *error = nil;

    // Setup the video input
    AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice
                                                                             error:&error];
    // Setup the video output
    _videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    _videoOutput.alwaysDiscardsLateVideoFrames = NO;
    _videoOutput.videoSettings = [NSDictionary dictionaryWithObject:
                                  [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                  forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    // Setup the audio input
    AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice
                                                                             error:&error];
    // Setup the audio output
    _audioOutput = [[AVCaptureAudioDataOutput alloc] init];

    // Create the session
    _capSession = [[AVCaptureSession alloc] init];
    [_capSession addInput:videoInput];
    [_capSession addInput:audioInput];
    [_capSession addOutput:_videoOutput];
    [_capSession addOutput:_audioOutput];
    _capSession.sessionPreset = AVCaptureSessionPresetLow;

    // Setup the queue
    dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
    [_videoOutput setSampleBufferDelegate:self queue:queue];
    [_audioOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

Setting up the AVAssetWriter and associating the audio and video AVAssetWriterInputs to it:

    - (BOOL)setupWriter
    {
        NSError *error = nil;
        _videoWriter = [[AVAssetWriter alloc] initWithURL:videoURL
                                                 fileType:AVFileTypeQuickTimeMovie
                                                    error:&error];
        NSParameterAssert(_videoWriter);

        // Add the video input
        NSDictionary *videoCompressionProps = [NSDictionary dictionaryWithObjectsAndKeys:
                                               [NSNumber numberWithDouble:128.0 * 1024.0], AVVideoAverageBitRateKey,
                                               nil];
        NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                       AVVideoCodecH264, AVVideoCodecKey,
                                       [NSNumber numberWithInt:192], AVVideoWidthKey,
                                       [NSNumber numberWithInt:144], AVVideoHeightKey,
                                       videoCompressionProps, AVVideoCompressionPropertiesKey,
                                       nil];
        _videoWriterInput = [[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                outputSettings:videoSettings] retain];
        NSParameterAssert(_videoWriterInput);
        _videoWriterInput.expectsMediaDataInRealTime = YES;

        // Add the audio input
        AudioChannelLayout acl;
        bzero(&acl, sizeof(acl));
        acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;

        NSDictionary *audioOutputSettings = nil;
        // Both types of audio input cause the output video file to be corrupted.
        if (NO) {
            // should work from iPhone 3GS on and from iPod touch 3rd generation
            audioOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
                                   [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
                                   [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                   [NSNumber numberWithInt:64000], AVEncoderBitRateKey,
                                   [NSData dataWithBytes:&acl length:sizeof(acl)], AVChannelLayoutKey,
                                   nil];
        } else {
            // should work on any device; requires more space
            audioOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   [NSNumber numberWithInt:kAudioFormatAppleLossless], AVFormatIDKey,
                                   [NSNumber numberWithInt:16], AVEncoderBitDepthHintKey,
                                   [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                   [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
                                   [NSData dataWithBytes:&acl length:sizeof(acl)], AVChannelLayoutKey,
                                   nil];
        }
        _audioWriterInput = [[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                                                outputSettings:audioOutputSettings] retain];
        _audioWriterInput.expectsMediaDataInRealTime = YES;

        // add the inputs
        [_videoWriter addInput:_videoWriterInput];
        [_videoWriter addInput:_audioWriterInput];

        return YES;
    }

Here are the functions to start/stop video recording:

    - (void)startVideoRecording
    {
        if (!_isRecording) {
            NSLog(@"start video recording...");
            if (![self setupWriter]) {
                return;
            }
            _isRecording = YES;
        }
    }

    - (void)stopVideoRecording
    {
        if (_isRecording) {
            _isRecording = NO;
            [_videoWriterInput markAsFinished];
            [_videoWriter endSessionAtSourceTime:lastSampleTime];
            [_videoWriter finishWriting];
            NSLog(@"video recording stopped");
        }
    }

And finally, the captureOutput code:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        if (!CMSampleBufferDataIsReady(sampleBuffer)) {
            NSLog(@"sample buffer is not ready. Skipping sample");
            return;
        }
        if (_isRecording == YES) {
            lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
            if (_videoWriter.status != AVAssetWriterStatusWriting) {
                [_videoWriter startWriting];
                [_videoWriter startSessionAtSourceTime:lastSampleTime];
            }
            if (captureOutput == _videoOutput) {
                [self newVideoSample:sampleBuffer];
            }
            /*
            // If I add audio to the video, then the output file gets corrupted
            // and it cannot be played back.
            } else {
                [self newAudioSample:sampleBuffer];
            }
            */
        }
    }

    - (void)newVideoSample:(CMSampleBufferRef)sampleBuffer
    {
        if (_isRecording) {
            if (_videoWriter.status > AVAssetWriterStatusWriting) {
                NSLog(@"Warning: writer status is %d", _videoWriter.status);
                if (_videoWriter.status == AVAssetWriterStatusFailed)
                    NSLog(@"Error: %@", _videoWriter.error);
                return;
            }
            if (![_videoWriterInput appendSampleBuffer:sampleBuffer]) {
                NSLog(@"Unable to write to video input");
            }
        }
    }

    - (void)newAudioSample:(CMSampleBufferRef)sampleBuffer
    {
        if (_isRecording) {
            if (_videoWriter.status > AVAssetWriterStatusWriting) {
                NSLog(@"Warning: writer status is %d", _videoWriter.status);
                if (_videoWriter.status == AVAssetWriterStatusFailed)
                    NSLog(@"Error: %@", _videoWriter.error);
                return;
            }
            if (![_audioWriterInput appendSampleBuffer:sampleBuffer]) {
                NSLog(@"Unable to write to audio input");
            }
        }
    }

I would be very happy if someone could find what is wrong with this code.

In startVideoRecording I call (I assume you are calling this at some point):

    [_capSession startRunning];

In stopVideoRecording I do not call:

    [_videoWriterInput markAsFinished];
    [_videoWriter endSessionAtSourceTime:lastSampleTime];

markAsFinished is more for use with the block-style pull approach. See requestMediaDataWhenReadyOnQueue:usingBlock: on AVAssetWriterInput for an explanation. The library should compute the proper timing for interleaving the buffers.
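For reference, that pull-style approach looks roughly like this (a sketch only; `mediaQueue` and `nextSampleBuffer` are hypothetical placeholders, not part of the code above):

    // Sketch of the pull model: the writer input asks for data whenever it is
    // ready, and markAsFinished signals the end of that input's media stream.
    [_videoWriterInput requestMediaDataWhenReadyOnQueue:mediaQueue usingBlock:^{
        while (_videoWriterInput.isReadyForMoreMediaData) {
            CMSampleBufferRef buffer = [self nextSampleBuffer]; // hypothetical source
            if (buffer == NULL) {
                [_videoWriterInput markAsFinished]; // no more media for this input
                break;
            }
            [_videoWriterInput appendSampleBuffer:buffer];
            CFRelease(buffer);
        }
    }];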

You do not need to call endSessionAtSourceTime:. The last time stamp in the sample data will be used after you call:

 [_videoWriter finishWriting]; 

I also check explicitly for the type of the capture output:

    else if (captureOutput == _audioOutput) {
        [self newAudioSample:sampleBuffer];
    }
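Putting that branch into the delegate method from the question, the dispatch would look roughly like this (a sketch assembled from the code above):

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        if (!CMSampleBufferDataIsReady(sampleBuffer)) {
            return;
        }
        if (_isRecording) {
            lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
            if (_videoWriter.status != AVAssetWriterStatusWriting) {
                [_videoWriter startWriting];
                [_videoWriter startSessionAtSourceTime:lastSampleTime];
            }
            // Route each buffer to the writer input that matches its capture output.
            if (captureOutput == _videoOutput) {
                [self newVideoSample:sampleBuffer];
            } else if (captureOutput == _audioOutput) {
                [self newAudioSample:sampleBuffer];
            }
        }
    }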

Here is what I have. Audio and video both come through for me. It is possible that I changed something; if this does not work for you, I will post everything I have.

    - (void)startVideoRecording
    {
        if (!_isRecording) {
            NSLog(@"start video recording...");
            if (![self setupWriter]) {
                NSLog(@"Setup Writer Failed");
                return;
            }
            [_capSession startRunning];
            _isRecording = YES;
        }
    }

    - (void)stopVideoRecording
    {
        if (_isRecording) {
            _isRecording = NO;
            [_capSession stopRunning];
            if (![_videoWriter finishWriting]) {
                NSLog(@"finishWriting returned NO");
            }
            //[_videoWriter endSessionAtSourceTime:lastSampleTime];
            //[_videoWriterInput markAsFinished];
            //[_audioWriterInput markAsFinished];
            NSLog(@"video recording stopped");
        }
    }

First, do not use [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], because it is not the camera's native format. Use [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange] instead.
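Applied to the videoSettings from the capture setup above, that change would look like:

    // Ask the camera for its native bi-planar YUV format instead of BGRA,
    // which avoids an extra pixel-format conversion on every frame.
    _videoOutput.videoSettings = [NSDictionary dictionaryWithObject:
                                  [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange]
                                  forKey:(id)kCVPixelBufferPixelFormatTypeKey];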

Also, you should always check that the writer is not already running before calling startWriting. You do not need to set the session end time, since finishWriting takes care of that.
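One way to express that guard (a sketch): a writer that has not been started yet has status AVAssetWriterStatusUnknown, so test for that rather than != AVAssetWriterStatusWriting, which would also match a failed or cancelled writer:

    if (_videoWriter.status == AVAssetWriterStatusUnknown) {
        // Start exactly once; a failed or cancelled writer must not be restarted.
        [_videoWriter startWriting];
        [_videoWriter startSessionAtSourceTime:lastSampleTime];
    }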