
My app crashes when another app (Microsoft Teams, in my case) is using the microphone in the background and I try to record audio inside my app.

Terminating app due to uncaught exception ‘com.apple.coreaudio.avfaudio’, reason: ‘required condition is false: IsFormatSampleRateAndChannelCountValid(format)’

Please refer to the code below:

func startRecording() {
        
        // Clear all previous session data and cancel task
        if recognitionTask != nil {
            recognitionTask?.cancel()
            recognitionTask = nil
        }

        // Create instance of audio session to record voice
        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setCategory(AVAudioSession.Category.record, mode: AVAudioSession.Mode.measurement, options: AVAudioSession.CategoryOptions.defaultToSpeaker)
            try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
        } catch {
            print("audioSession properties weren't set because of an error.")
        }
    
        self.recognitionRequest = SFSpeechAudioBufferRecognitionRequest()

        let inputNode = audioEngine.inputNode

        guard let recognitionRequest = recognitionRequest else {
            fatalError("Unable to create an SFSpeechAudioBufferRecognitionRequest object")
        }

        recognitionRequest.shouldReportPartialResults = true

        self.recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in

            var isFinal = false

            if result != nil {

                self.textField.text = result?.bestTranscription.formattedString
                isFinal = (result?.isFinal)!
            }

            if error != nil || isFinal {

                self.audioEngine.stop()
                inputNode.removeTap(onBus: 0)

                self.recognitionRequest = nil
                self.recognitionTask = nil

                self.micButton.isEnabled = true
            }
        })
    
        let recordingFormat = inputNode.outputFormat(forBus: 0)

    
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
            self.recognitionRequest?.append(buffer)
        }

        self.audioEngine.prepare()

        do {
            try self.audioEngine.start()
        } catch {
            print("audioEngine couldn't start because of an error.")
        }

        self.textField.text = ""
    }

I am pretty sure the problem is somewhere in here, but I am not sure how to fix it:

let recordingFormat = inputNode.outputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
            self.recognitionRequest?.append(buffer)
        }
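For context on that crash (an assumption based on the error text, not confirmed by the question): while another app holds the microphone, inputNode.outputFormat(forBus: 0) can return a format whose sample rate or channel count is 0, and installTap(onBus:bufferSize:format:) then raises the IsFormatSampleRateAndChannelCountValid exception. A minimal guard, reusing the same properties as the code above:

```swift
// Sketch: validate the input format before installing the tap.
let inputNode = audioEngine.inputNode
let recordingFormat = inputNode.outputFormat(forBus: 0)

// While another app (e.g. Teams) owns the mic, this format can come back
// with sampleRate == 0 or channelCount == 0, which makes installTap throw.
guard recordingFormat.sampleRate > 0, recordingFormat.channelCount > 0 else {
    print("Microphone is busy; cannot record right now")
    return
}

inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, _ in
    self.recognitionRequest?.append(buffer)
}
```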

2 Answers


  1. Chosen as BEST ANSWER

So the app was crashing because I didn't handle the case where a valid microphone channel wasn't available.

    Step 1 Create a protocol at the top of the file where you have let audioEngine = AVAudioEngine() (after the imports), to report errors:

    protocol FeedbackViewDelegate : AnyObject {
        func showFeedbackError(title: String, message: String)
        func audioDidStart(forType type : FeedbackViewType)
    }
    

    Step 2 Change the function to return a Bool that indicates whether recording started successfully:

     func startRecording() -> Bool {
         // ...
     }
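With a Bool return, the call site can react to failure. A sketch (micImageView and the "micRed" asset are the outlet and image names used in the full listing in the second answer):

```swift
// Sketch: only flip the UI into "recording" state when startRecording()
// actually succeeds.
if self.startRecording() {
    self.micImageView.image = UIImage(named: "micRed")
} else {
    // The delegate has already shown the error message at this point.
    print("Recording did not start")
}
```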
    

    Step 3 In the catch block of the audio session setup, report the error and return false (this prevents the crash):

     let audioSession = AVAudioSession.sharedInstance()
                do {
                    try audioSession.setCategory(AVAudioSession.Category.playAndRecord, mode: AVAudioSession.Mode.measurement, options: AVAudioSession.CategoryOptions.defaultToSpeaker)
                    try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
                } catch {
                    print("audioSession properties weren't set because of an error.")
                    delegate?.showFeedbackError(title: "Sorry", message: "Mic is busy")
                    return false
                }
    

    The return false above stops the rest of the function from executing when the session can't be configured.

    Step 4 Create an extension on your view controller that conforms to the protocol (in my case FeedbackViewDelegate):

    extension YourViewController : FeedbackViewDelegate { // YourViewController = your own class
            func showFeedbackError(title: String, message: String) {
                // present an alert here
            }

            func audioDidStart(forType type: FeedbackViewType) {
                // react to recording starting
            }
    }

    Inside showFeedbackError you can create and present an alert (there are millions of examples on the web); remember to use [weak self] in the alert handler's closure.
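    For example, showFeedbackError can present a UIAlertController. A sketch, assuming the extension is on a UIViewController subclass so present(_:animated:completion:) is available:

```swift
func showFeedbackError(title: String, message: String) {
    let alert = UIAlertController(title: title, message: message, preferredStyle: .alert)
    // Use [weak self] in action handlers if you capture self, to avoid retain cycles.
    alert.addAction(UIAlertAction(title: "OK", style: .default, handler: nil))
    present(alert, animated: true, completion: nil)
}
```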


  2. fileprivate let NibName = "FeedbackView"
    protocol FeedbackViewDelegate : AnyObject {
        func showFeedbackError(title: String, message: String)
        func audioDidStart(forType type : FeedbackViewType)
    }
    
    enum FeedbackViewType {
        
        case feedbackView, rootcauseView, suggestionView, actionView
        
    }
    
    class FeedbackView: UIControl, ViewLoadable, SFSpeechRecognizerDelegate {
        
        @IBOutlet weak var textField: UITextField!
        
        static var nibName: String = NibName
        
        var feedbackViewType : FeedbackViewType = .feedbackView
        
        @IBOutlet var contentView: UIView!
        
        @IBOutlet weak var micButton: UIButton!
        
        @IBOutlet weak var micView: DefaultCardView!
        
        @IBOutlet weak var micImageView: UIImageView!
        
        weak var delegate : FeedbackViewDelegate?
        var allowTextEntry = true
        
        let speechRecognizer        = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
    
           var recognitionRequest      : SFSpeechAudioBufferRecognitionRequest?
           var recognitionTask         : SFSpeechRecognitionTask?
           let audioEngine             = AVAudioEngine()
        
        override init(frame: CGRect) {
            super.init(frame: frame)
            commonInit()
        }
        
        required public init?(coder aDecoder: NSCoder) {
            super.init(coder: aDecoder)
            commonInit()
        }
        
        init() {
            super.init(frame: CGRect.zero)
            commonInit()
        }
        
        private func commonInit() {
            Bundle(for: type(of: self)).loadNibNamed(NibName, owner: self, options: nil)
            backgroundColor = .clear
            addSubview(contentView)
            contentView.frame = self.bounds
            contentView.autoresizingMask = [.flexibleHeight, .flexibleWidth]
          
        }
        
        func configure(text: String, placeholder:String, contentType: UITextContentType,keyboardType:UIKeyboardType) {
            
            print("Did configure keyboard")
            self.textField.textContentType = contentType
            self.textField.isSecureTextEntry = (contentType == .password)
            self.textField.keyboardType = keyboardType
            self.textField.delegate = self
            self.textField.placeholder = placeholder
            if(!text.isEmpty) {
                self.textField.text = text
            }
        }
        
        
        @IBAction func btnStartSpeechToText(_ sender: UIButton) {
    //        allowTextEntry = false
            if audioEngine.isRunning {
                let audioText = textField.text
                      self.audioEngine.stop()
                DispatchQueue.main.asyncAfter(deadline: .now() + 0.2) {
                    self.textField.text = audioText
    //                self.allowTextEntry = true
                }
                textField.text = audioText
                      self.micButton.isEnabled = true
                      self.micImageView.image = UIImage(named: "mic")
                  } else {
                      print("Audio did start")
                      self.delegate?.audioDidStart(forType: self.feedbackViewType)
                      self.setupSpeech()
                      if self.startRecording() {
                          self.micImageView.image = UIImage(named: "micRed")
    
                      }
                  }
        }
        
        func stopRecording() {
    //        allowTextEntry = false
            let audioText = textField.text
            self.audioEngine.stop()
            self.recognitionRequest?.endAudio()
            DispatchQueue.main.asyncAfter(deadline: .now() + 0.2) {
                self.textField.text = audioText
    //            self.allowTextEntry = true
            }
            self.micButton.isEnabled = true
            self.micImageView.image = UIImage(named: "mic")
        }
        
        func setupSpeech() {
            
    //           self.micButton.isEnabled = false
               self.speechRecognizer?.delegate = self
    
               SFSpeechRecognizer.requestAuthorization { (authStatus) in
    
                   var isButtonEnabled = false
    
                   switch authStatus {
                   case .authorized:
                       isButtonEnabled = true
    
                   case .denied:
                       isButtonEnabled = false
                       print("User denied access to speech recognition")
    
                   case .restricted:
                       isButtonEnabled = false
                       print("Speech recognition restricted on this device")
    
                   case .notDetermined:
                       isButtonEnabled = false
                       print("Speech recognition not yet authorized")
                   }
    
                   OperationQueue.main.addOperation() {
    //                   self.micButton.isEnabled = isButtonEnabled
                   }
               }
           }
        
    //    func audioInputIsBusy(recordingFormat: AVAudioFormat) -> Bool {
    //        guard recordingFormat.sampleRate == 0 || recordingFormat.channelCount == 0 else {
    //            return false
    //        }
    //        return true
    //    }
        
        func startRecording() -> Bool {
                
                // Clear all previous session data and cancel task
                if recognitionTask != nil {
                    recognitionTask?.cancel()
                    recognitionTask = nil
                }
    
                // Create instance of audio session to record voice
                let audioSession = AVAudioSession.sharedInstance()
                do {
                    try audioSession.setCategory(AVAudioSession.Category.playAndRecord, mode: AVAudioSession.Mode.measurement, options: AVAudioSession.CategoryOptions.defaultToSpeaker)
                    try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
                } catch {
                    print("audioSession properties weren't set because of an error.")
                    delegate?.showFeedbackError(title: "Sorry", message: "Mic is busy")
                    return false
                }
            
                self.recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
    
                let inputNode = audioEngine.inputNode
    
                guard let recognitionRequest = recognitionRequest else {
                    fatalError("Unable to create an SFSpeechAudioBufferRecognitionRequest object")
                }
    
                recognitionRequest.shouldReportPartialResults = true
    
                self.recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in
    
                    var isFinal = false
    
                    if let result = result {

                        self.textField.text = result.bestTranscription.formattedString
                        isFinal = result.isFinal
                    }
    
                    if error != nil || isFinal {
    
                        self.audioEngine.stop()
                        inputNode.removeTap(onBus: 0)
                        self.recognitionRequest = nil
                        self.recognitionTask = nil
                        self.micButton.isEnabled = true
                    }
                })
            
                let recordingFormat = inputNode.outputFormat(forBus: 0)
                inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
                    self.recognitionRequest?.append(buffer)
                }
    
                self.audioEngine.prepare()
    
                do {
                    try self.audioEngine.start()
                } catch {
                    print("audioEngine couldn't start because of an error.")
                    delegate?.showFeedbackError(title: "Sorry", message: "Your microphone is used somewhere else")
                    return false
                }
    
                self.textField.text = ""
            return true
            }
        
        func speechRecognizer(_ speechRecognizer: SFSpeechRecognizer, availabilityDidChange available: Bool) {
            self.micButton.isEnabled = available
        }
        
    
    }
    
    extension FeedbackView: UITextFieldDelegate {
        
        func textFieldShouldReturn(_ textField: UITextField) -> Bool {
            self.endEditing(true)
            return false
        }
        
        func textField(_ textField: UITextField, shouldChangeCharactersIn range: NSRange, replacementString string: String) -> Bool {
            return allowTextEntry
        }
    }
    