This project is heavily based on the DTLN and DTLN-aec projects by breizhn. Kudos to PINTO0309 for the cool tflite2tensorflow tool. Thanks to StuartIanNaylor for the discussion and help on various topics, ranging from the idea of multiprocessing to troubleshooting ALSA configs.
When you don't have a soundcard that supports hardware loopback, you need to create a virtual input device whose last channel stores the playback loopback. I made an ALSA AEC plugin that can achieve this. Refer to the instructions there to set up and configure; you will then have two additional ALSA interfaces: aec and aec_internal.

- Play some music to the AEC virtual device: aplay -D aec music.wav
- Run the AEC script: python3 aec.py -m 128 -i aec_internal -o aec_internal
- Record from the AEC virtual device: arecord -D aec -f S16_LE -r 16000 -c 1 -V mono rec.wav
- Now listen to the recorded audio file; the music should be removed.

For testing, you may also use the -save option to save the input audio to /tmp/aec_in.wav and the AEC output to /tmp/aec_out.wav for inspection.

Below is a sample input/output with my ReSpeaker 2-Mics Pi HAT (run with the 256 model). This table is an evaluation of processing time on my Raspberry Pi 3B+ with a 64-bit OS.

Note that all DTLN models were trained with blocks of 128 samples. At a sample rate of 16000 Hz, one block is 8 ms, so the per-block processing time must stay below 8 ms to work in realtime. In my experience, it's better to use models that process a block in under 5 ms, otherwise you may see some "output underflow" messages (sometimes increasing -latency may help).

The aec_mp.py script uses multiprocessing.shared_memory, which requires Python 3.8+. If you are using an older version of Python, please run pip3 install shared-memory38 first to install the backport module. If pyfftw is installed, the script will use pyfftw instead of np.fft, which gives a negligible ~0.2 ms speedup. Note that in aec_mp.py the overall processing time is capped by the interpreter2 inference time, so the FFT processing time in the parallel process doesn't matter.

One huge benefit of DTLN-aec compared to traditional AEC is that it's more robust: it can work even when the input has gone through other preprocessing stages like NS or another AEC, and it can adapt to different delays, so you may also try it with a Bluetooth speaker.
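The version-dependent import that the Python 3.8+ requirement implies can be sketched as below. This is illustrative, not copied from aec_mp.py; it assumes the shared-memory38 backport exposes the same SharedMemory API under the module name shared_memory.

```python
# Prefer the stdlib module (Python >= 3.8); fall back to the backport
# installed via: pip3 install shared-memory38 (assumed import name).
try:
    from multiprocessing.shared_memory import SharedMemory  # Python 3.8+
except ImportError:
    from shared_memory import SharedMemory  # shared-memory38 backport

import numpy as np

# Place one 128-sample float32 audio block in shared memory so a second
# process could map the same buffer by name instead of copying samples.
shm = SharedMemory(create=True, size=128 * 4)
block = np.ndarray((128,), dtype=np.float32, buffer=shm.buf)
block[:] = 0.5                      # pretend this is one captured block
first_sample = float(block[0])

del block                           # drop the buffer view before closing
shm.close()
shm.unlink()
```

Passing only the shared memory name to the worker process avoids pickling each audio block, which is what makes the multiprocessing split cheap.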
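The 8 ms budget and the pyfftw-or-numpy fallback can be sketched together as follows. The alias name fft_backend is illustrative; the numbers (128-sample blocks, 16 kHz) come from the text above.

```python
import numpy as np

# Use pyfftw when available, otherwise fall back to numpy's FFT
# (pyfftw.interfaces.numpy_fft mirrors the np.fft function names).
try:
    import pyfftw.interfaces.numpy_fft as fft_backend
except ImportError:
    fft_backend = np.fft

BLOCK = 128    # all DTLN models were trained on 128-sample blocks
RATE = 16000   # Hz

# Real-time budget: a block must be processed before the next one arrives.
budget_ms = BLOCK / RATE * 1000.0   # 128 / 16000 s = 8.0 ms

spectrum = fft_backend.rfft(np.zeros(BLOCK, dtype=np.float32))
print(f"budget per block: {budget_ms} ms, rfft bins: {spectrum.shape[0]}")
```

A real FFT of a 128-sample block yields 65 frequency bins, which is the per-block spectrum the models operate on.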
I added a few useful options to ns.py based on the original script.

- Enable snd-aloop with sudo modprobe snd_aloop. You may want to add a line snd-aloop to /etc/modules to automatically enable it on boot.
- Now check arecord -l; you should be able to see two new Loopback devices.
- Run DTLN with python3 ns.py -o 'Loopback,0' -measure; you should see the processing times. If your processing time is longer than the budget, you may need a more powerful device. If you see a lot of "input underflow" messages, try adjusting the latency to a higher value, e.g., -latency 0.5.
- Run arecord -D hw:Loopback,1 -f float_le -r 16000 -c 1 -V mono rec.wav in a separate shell to record the denoised audio. You should notice obvious noise removal and a clear voice.
- To run it as a service: copy ns.py to /usr/local/bin/ns and chmod +x; add dtln_ns.service to /etc/systemd/user/ and enable it with systemctl --global enable dtln_ns. Reboot and record some audio from hw:Loopback,1 to see that DTLN NS is running and taking effect.

The DTLN-aec project currently only has a file-based demo script with tflite (not quantized) models. To make it realtime, I converted the models to quantized models and created two realtime scripts:

- models/dtln_aec_?_quant* are the quantized models; ? is the number of LSTM units (larger means slower but supposedly better).
- aec.py takes a pair of devices as input and output. It assumes the input device contains a channel that serves as the loopback/reference.
- aec_mp.py is a multiprocessing version; it runs close to 2x faster on the 256/512 models.

You need a sound card that supports hardware loopback, with the loopback on the last channel of the captured audio. In my case it is the ReSpeaker USB Mic Array V2.0, which has 6 input channels, the last one being the playback loopback. Note down a unique substring of your soundcard's name. Test with python3 ns.py -i UAC1.0 -o UAC1.0 -c 6 -m 128. Speak into your mic; you should hear no feedback echo. Follow a procedure similar to the DTLN NS setup to route the AEC output to a virtual capture device.
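The kind of per-block timing that a -measure style option reports can be mimicked with a stand-alone sketch like this. The process() function is a stand-in for model inference, not the actual DTLN model call.

```python
import time

import numpy as np

# Time a dummy per-block processing function many times and compare its
# mean cost against the 8 ms real-time budget of a 128-sample block at
# 16 kHz. Replace process() with real inference to get meaningful numbers.
def process(block):
    return np.tanh(block)  # placeholder for model inference

block = np.zeros(128, dtype=np.float32)
samples_ms = []
for _ in range(200):
    t0 = time.perf_counter()
    process(block)
    samples_ms.append((time.perf_counter() - t0) * 1000.0)

mean_ms = sum(samples_ms) / len(samples_ms)
print(f"mean per-block processing time: {mean_ms:.3f} ms (budget: 8.0 ms)")
```

If the mean time printed here (with real inference substituted in) exceeds 8 ms, the stream cannot keep up and underflow messages are expected.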
The goal of this project is to integrate and use two amazing pretrained models, DTLN and DTLN-aec, on a Raspberry Pi for realtime noise suppression (NS) and/or acoustic echo cancellation (AEC). This is simple for NS, as the DTLN project already provides a realtime script for handling data from/to audio devices.