
digital-walkie-talkie

long range, low power, modular

The final goal is to have a long-range, low-power, low-bandwidth, decent-audio-quality and low-cost radio. Before attempting that build, some experimentation will be done around the main building blocks of this project:
* voice codec
* digital data transmission : reliability
* audio quality: needed bitrate
We'll start off using existing hardware: a laptop and a Wandboard. Both are equipped with audio inputs and outputs and a network connection, which is the minimal setup for the application. First experiments will be done in Python. It has the advantage that it's simple, popular and widely documented. The same Python code runs on the desktop and on the embedded platform.

Progress

Voice codec

☑ Selection of voice codec : Codec2
☑ Implementing voice codec on embedded platform : esp32-codec2
☐ Making unit test for voice codec
☐ Turning Codec2 into a standalone Arduino library, which will allow for easier integration by third parties.

Audio streaming

☑ Audio playback : Sine output by I²S on ESP32's internal DAC : esp32-dds (direct digital synthesis)
☑ Real time Codec2 decoding and audio output on ESP32's internal DAC : esp32-codec2-DAC
☑ Audio capture (through I²S)
☑ Output sine wave to external I²S audio codec (i.e. SGTL5000)
☑ Decode Codec2 packets in real time and output them on SGTL5000 headphone and line out. The Codec2 decoding and audio streaming are all done in tasks; the 'loop'-function has nothing to do.
☑ Audio feed-through using SGTL5000 : it took some tweaking to adjust the input audio level to the line-in levels of the SGTL5000 and the headphone output volume settings.  The I²S peripheral works full duplex here, while the ESP32 documentation only mentions half-duplex operation.
☑ Real time Codec2 encoding of analog audio from the SGTL5000's line input.  Codec packets are printed in real time in base64 format to the serial port.
☐ Audio filtering in SGTL5000, which codec2 should benefit from.
☑ Half-duplex operation : every few seconds the codec switches between encoding and decoding.  It decodes packets stored in flash.  It encodes audio from the SGTL5000 codec.
☑ Refactoring encoding/decoding of packets.  Codec2-engine now has two separate queues for output and two separate queues for input.  Semaphores have been removed as they made the code unnecessarily complicated.
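The base64 logging of codec2 packets mentioned above can be sketched in a few lines of Python (the 6-byte frame content below is made up for illustration; codec2 at 1200bps produces 6-byte frames):

```python
import base64

def frame_to_base64(frame: bytes) -> str:
    """Encode one raw codec2 frame for text-safe serial logging."""
    return base64.b64encode(frame).decode("ascii")

def base64_to_frame(line: str) -> bytes:
    """Recover the raw frame from a logged line."""
    return base64.b64decode(line)

# A codec2 1200 bps frame is 6 bytes (hypothetical sample content).
frame = bytes([0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC])
line = frame_to_base64(frame)
```

Each 6-byte frame becomes an 8-character line, which is easy to capture from a serial terminal and decode offline.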

Wireless communication

☑ Generating some RTTTL music and transmitting it with the SX1278 FSK-modem on 434MHz.  The RSP1A decodes it fine using CubicSDR.  It's not very useful, but it's fun.  Using PDM, we might even be able to play rudimentary audio.
☑ SX1278 modules using RadioLib : LoRa, FSK and OOK.
☑ SI4463 module using RadioHead : 2GFSK
☑ SI4463 module using Zak Kemble's SI4463 library : 4GFSK in 6.25kHz channel spacing
☑ Adding SI4463 to RadioLib library (basic RX/TX works, but the code needs a lot of clean up)
☑ Adding RSSI readout to SI4463 RadioLib library (in preparation of antenna comparison tests)
☑ Deriving from Arduino Stream class in wireless library + interrupt based.  Based on Zak Kemble's library and arduino-LoRa.  Abandoning RadioLib.  This makes it possible to add existing overhead protocols such as PacketIO or nanopb-arduino (Google Protocol Buffers). 
☑ Expanding packet size beyond 128bytes to improve air rate efficiency.  This implementation can send packets of 400 payload bytes or more, unlike many other si4463 libraries on github which are limited to 255 or 129 bytes or even less.  The implementation can be found here : si4463-stream.
☐ Introducing FX.25 FEC (forward error correction) to reduce packet loss on bad quality links.
☑ Implement this as a KISS modem for https://github.com/sh123/codec2_talkie. Source code on : arduino-kiss.

Audio & wireless combined

The implementation of sending raw audio packets works, but it's not compatible with similar solutions.  Further development of audio & wireless combined will most likely be abandoned in favor of a solution like codec2_talkie or m17-kiss-ht.

☑ One way radio : transmitter station sends codec2 packets, while receiving station decodes them and plays them through the SGTL5000 on the headphone.
☑ Two way radio with PTT : both stations run the same code. When the PTT button is pushed, the station starts encoding audio from the line-in of the SGTL5000 and broadcasts the packets using the SI4463. The other station receives the packets, decodes them, and plays them through the SGTL5000.  The custom main.cpp source file is less than 150 lines long; the remainder of the code consists of reusable libraries.

Security

Implemented using libsodium :

☑ Authenticated key exchange...


TK-3201(ET)-English.pdf

User Manual Kenwood TK-3201 PMR446 radio

Adobe Portable Document Format - 559.56 kB - 01/02/2021 at 18:55


youtube_codec2_1200.wav

Audio from a Youtube video run through codec2

x-wav - 1.83 MB - 09/09/2020 at 19:41


ve9qrp.txt

audio transcription

plain - 1.15 kB - 08/28/2020 at 18:54


ve9qrp.wav

original voice sample

x-wav - 1.72 MB - 08/09/2020 at 14:56


ve9qrp_codec2_1200.wav

voice sample passed through codec2 voice codec

x-wav - 1.72 MB - 08/09/2020 at 14:56


  • Arduino KISS interface

    Christoph Tack11/07/2021 at 19:29 0 comments

    Trying to avoid reinventing the wheel, I looked for some existing KISS sources:

    A lot of time and effort has been put into these firmwares, but...  For implementing a KISS interface, shouldn't the code be KISS as well?  These source codes are not simple: relying on AFSK and/or AX.25 makes the classes complicated and hard to port to other platforms or tool chains.  The firmwares are not for beginners either.  The Mobilinkd C++ code may be beautifully written, but it's hard to read for novice C++ programmers.

    So I'll need to write my own code based on the PacketSerial and HamShield_KISS libraries while adhering as much as possible to the KISS-"standard".
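The KISS framing itself is genuinely simple, which is the point of writing minimal code for it. A Python sketch of the escaping rules (FEND/FESC special bytes as defined in the KISS specification; the command byte 0x00 means "data, port 0"):

```python
# KISS special bytes, per the KISS specification
FEND, FESC, TFEND, TFESC = 0xC0, 0xDB, 0xDC, 0xDD

def kiss_encode(payload: bytes, command: int = 0x00) -> bytes:
    """Wrap a payload in a KISS frame: FEND, command byte, escaped data, FEND."""
    out = bytearray([FEND, command])
    for b in payload:
        if b == FEND:
            out += bytes([FESC, TFEND])   # escape the frame delimiter
        elif b == FESC:
            out += bytes([FESC, TFESC])   # escape the escape byte itself
        else:
            out.append(b)
    out.append(FEND)
    return bytes(out)

def kiss_decode(frame: bytes) -> bytes:
    """Undo the escaping; expects a single well-formed frame."""
    body = frame.strip(bytes([FEND]))[1:]  # drop delimiters and command byte
    out, esc = bytearray(), False
    for b in body:
        if esc:
            out.append(FEND if b == TFEND else FESC)
            esc = False
        elif b == FESC:
            esc = True
        else:
            out.append(b)
    return bytes(out)
```

The whole transport fits in two small functions, which is exactly the kind of code that stays portable across platforms and tool chains.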

    Android

    As mentioned elsewhere, instead of implementing all functionality in a single device, I'll offload functionality to an Android device.  These apps are already there, so why not use them?  Why would I decode Codec2 samples in the ESP32 when there's already an Android app that can do that for me?

    The only thing needed is a KISS-TNC interface and a Bluetooth serial port to talk to these Android apps.  This is not so hard to do.

    Codec2 Talkie

    I have tested my KISS-TNC code with the sh123 android app to send and receive Codec2 speech samples.  Codec2 samples sent by the ESP32 were correctly decoded and output on the Android app.  I've also off-line decoded Codec2 samples that the ESP32 received from the Android app.

    Aprsdroid

    Sending voice with Codec2 Talkie is one thing, but it would be nice to have a messenger application with the added bonus of sharing location info.  That is what Aprsdroid can do for us.

    I'll mention Ripple Messenger (and LoRa Qwerty Messenger) here as an alternative, but that application looks like a single person project.

    APRS uses only a subset of the AX.25 framework, and Aprsdroid in its turn only uses a subset of the APRS protocol.  So we can get by with implementing only a very limited subset of the AX.25 framework.
    Moreover, the data we receive over the KISS interface doesn't carry flags, has no checksum and isn't bit-stuffed either.  This also saves us a considerable amount of work.

    Unfortunately, the APRS protocol is a real mess.  It seems the standard was written after the fact, trying to match applications that were already in field use.

    Settings for Aprsdroid to communicate with our TNC.

    Location reports

    AX.25-frame, received as Data Frame by our KISS-TNC :

    0x82 0xa0 0x88 0xa4 0x62 0x6c 0xe0 0x9c 0x60 0x86 0x82 0x98 0x98 0x61 0x03 0xf0 0x3d 0x30 0x31 0x30 0x30 0x2e 0x30 0x30 0x4e 0x5c 0x30 0x30 0x32 0x30 0x30 0x2e 0x30 0x30 0x45 0x29 0x20 0x68 0x74 0x74 0x70 0x73 0x3a 0x2f 0x2f 0x61 0x70 0x72 0x73 0x64 0x72 0x6f 0x69 0x64 0x2e 0x6f 0x72 0x67 0x2f 

    Decoded by AX.25 Frame Generator :

    Source & SSID: N0CALL-0
    Destination & SSID: APDR16-0
    PID : 0xF0 (no layer3 protocol)
    CONTROL : 0x03 (UI Frame)
    Payload: =0100.00N\00200.00E) https://aprsdroid.org/
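The address fields of the frame above can be decoded with a few lines of Python, following the AX.25 convention that each callsign character is stored shifted left by one bit, with the SSID in bits 1-4 of the seventh byte:

```python
def decode_ax25_address(addr: bytes) -> str:
    """Decode a 7-byte AX.25 address field: 6 shifted ASCII chars + SSID byte."""
    call = "".join(chr(b >> 1) for b in addr[:6]).strip()
    ssid = (addr[6] >> 1) & 0x0F
    return f"{call}-{ssid}"

# First 14 bytes of the frame shown above: destination address, then source.
frame = bytes([0x82, 0xA0, 0x88, 0xA4, 0x62, 0x6C, 0xE0,
               0x9C, 0x60, 0x86, 0x82, 0x98, 0x98, 0x61])
dest = decode_ax25_address(frame[:7])
src = decode_ax25_address(frame[7:14])
```

Running this on the captured frame yields the same destination and source as the AX.25 Frame Generator output above.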
    

    We can also send our own location reports to Aprsdroid:

    Fooling Aprsdroid by letting it know that we're in Ushuaia.

    Text message

    More info can be found in the APRS-specification, chapter 14.  Aprsdroid expects messages to be acknowledged, hence the '{' followed by a number at the end of the message.

    AX.25-frame,...


  • Security

    Christoph Tack07/25/2021 at 09:49 0 comments

    The advantage of using digital communication over analog is that it's much easier to implement decent security measures.  Security deals with the following properties of the information:

    • secrecy or confidentiality
    • authentication
    • non-repudiation
    • integrity control

    We won't reinvent the wheel here.  We'll see what TLS 1.3 and SSH have to offer us.

    Key exchange

    TLS leaves many options here, some of which are interesting to us: PSK, ECDHE, and a combination of the two.  In pre-shared key (PSK) mode, a pre-shared secret is established prior to key exchange.  This can be done using an out-of-band secure channel : serial cable, NFC, ...  The problem with this approach is that it doesn't scale well.  The pre-shared secret should be unique for each pair of devices.  If you have 20 devices, you'll end up generating and distributing 20!/(2!*18!) = 190 unique pre-shared secrets.
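The pair count above is just "n choose 2", which grows quadratically with the number of devices (a quick sketch):

```python
from math import comb

def psk_count(devices: int) -> int:
    """Number of unique pre-shared secrets needed: one per pair of devices."""
    return comb(devices, 2)

# 20 devices already need 190 secrets; 100 devices would need 4950.
twenty = psk_count(20)
```

This is why PSK provisioning doesn't scale, and why the ECDHE approach below is easier to manage.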

    ECDHE (Elliptic Curve Diffie-Hellman with Ephemeral keys) is easier to manage.  Each device in the group could be uploaded with a list of "certificates".  This list could be stored on an SD-card in the device.  The list would be the same for all devices in the pool and it doesn't even need to be secret.  WiFi could be used to upload the list to the devices.

    Message 1 : Client to server

    TLS 1.3 as well as SSH start with the client creating an ephemeral key pair.  It then sends a random number (to prevent replay attacks) and the ephemeral public key to the server.

    The ephemeral key pair is only used for key exchange.  Its lifetime is very short. 
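The ephemeral exchange can be illustrated with a toy finite-field Diffie-Hellman in Python. This is purely illustrative: a real build would use X25519 via libsodium, and the small Mersenne prime chosen here is not safe for production use.

```python
import secrets

# Toy parameters: 2**127 - 1 is prime, but far too small for real security.
# Real systems use X25519 or a 2048-bit MODP group.
P = 2**127 - 1
G = 3

def ephemeral_keypair():
    """Generate a short-lived private/public pair, discarded after the handshake."""
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

def shared_secret(own_priv: int, peer_pub: int) -> int:
    """Both sides compute the same value from their own private + peer public key."""
    return pow(peer_pub, own_priv, P)

# Client and server each create an ephemeral pair and exchange public halves.
c_priv, c_pub = ephemeral_keypair()
s_priv, s_pub = ephemeral_keypair()
```

Both sides arrive at the same shared secret without it ever being transmitted; the random nonces mentioned above are what bind the exchange to one session.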

    SSH

    In SSH this actually takes two different messages.

    • Random number = cookie, 16bytes long
    • Ephemeral public key = e (in SSH_MSG_KEXDH_INIT message)

    TLS1.3

    ClientHello message

    • Random : client_nonce = 32bytes
    • Extension : key_share : client's ephemeral public key

    Message 2 : Server to Client

    The server creates an ephemeral key pair  as well.  For both protocols, the server's first message contains about the same arguments as the client's first message.  The server then adds additional data.

    SSH

    • Random number = cookie, 16bytes long
    • Ephemeral public key = f (in SSH_MSG_KEXDH_REPLY message)

    Additional data:

    • the server's static public key K_S
    • signature: a signature over the hash of all key-exchange data so far, made with the private key corresponding to K_S.

    TLS1.3

    ServerHello message:

    1. Random : server_nonce = 32 bytes
    2. Extension: key_share : server's ephemeral public key

    Additional data: EncryptedExtension, encrypted with keys derived from handshake secret.

    Mutual authentication

    As shown above, the key exchange only provides server authentication.  The server has no way of telling who the client really is.  Implementing mutual authentication in TLS 1.3 simply requires an extra message, with the client sending similar EncryptedExtension data.

    Mutual authentication in SSH requires the SSH authentication protocol (RFC4252), which is a different message flow.  In short, the client sends its public key and attaches a signature to prove possession of the private key.

    Implementation

    As the Arduino tool chain for ESP32 already includes libsodium, we'll use that.  Monocypher can't be used with ESP32 because of linker conflicts with the libsodium library.

  • Housing

    Christoph Tack07/18/2021 at 20:25 0 comments

    As this project doesn't have a real use case yet, there's no requirement about the housing.

    Radio

    The original idea was to build the electronics in a Tongboxin C803 radio.

    Unfortunately, there's very little room for electronics.  The left side of the housing is taken up by the speaker.  The bottom side is taken up by the 18650-cells.

    The LED-segment display is soldered onto the PCB and needs to be de-soldered for taking the electronics from the housing.

    Power bank

    Another option is to use a power-bank housing.  These are fairly cheap and already contain room for some 18650 cells.  The solar panel is an extra, but probably won't be of much use.  The LED panel on the back might be replaced by some TFT or LCD panel.  It's transparent anyway.

    One of the things that puts me off is the probably low build quality.  There are no screws to hold it all together.

    Well, the housing has finally arrived.  It's indeed of low build quality.  The housing is held together by four miniature screws on either side.  There's a multitude of issues with this housing, but the main one is that it's not easily possible to mount connectors (SMA, audio jacks) on the sides.  There's also no means of fixing the 18650 cells: you're supposed to spot-weld the cells together, and the housing is too low to use an 18650 battery clip.

    The charging PCB is fixed in place, but there are no provisions to mount anything else.

    Extruded aluminium housing

    YGKT on AliExpress sells extruded Al housings for a reasonable price.  The metal housing will also serve as a ground plane for the monopole antenna.

    I expect to need about 6000mm² of PCB area.  Three options remain:

  • Data link layer

    Christoph Tack04/22/2021 at 18:08 3 comments

    Packeting

    Packet interval

    Codec2 1200bps has been selected; it needs to be fed 6 bytes every 40ms.

    dPMR uses packets that are (header (80ms) + 4 × super frame (320ms) + end (20ms)) = 1.38s long!  Using such long packets has the advantage that the overhead is relatively small compared to the payload.  It also implies that the FIFO is refilled while the transmission is ongoing.

    SCIP-210, Revision 3.6 §2.1.3 : Transport framing : all traffic is split up into 20-byte frames, of which 13 bytes are data.

    Packet size

    The raw data rate of Codec2 is 1200baud.  If we assume that raw data will only make up 25% of the total packet interval, then we'll need to send at least at 4800baud.  The remainder of the packet interval goes to:

    • inter-packet dead time
    • intra packet overhead for data link layer : preamble, sync word, CRC, ...
    • intra packet overhead for transport layer (security)

    If we want to adhere more or less to dPMR, we'll want to use 6.25kHz channels.  4800baud FSK needs more than 6.25kHz bandwidth, so we'll need more bits/symbol : 4(G)FSK.

    This only leaves the SI4463 and AX5043 as options.
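Why 4(G)FSK fits in the channel where 2FSK doesn't can be sketched numerically. Carson's rule gives an occupied-bandwidth estimate; the 1.5kHz peak deviation used below is an illustrative assumption, not a value taken from the project:

```python
def symbol_rate(bit_rate: float, bits_per_symbol: int) -> float:
    """Symbols per second needed to carry a given bit rate."""
    return bit_rate / bits_per_symbol

def carson_bandwidth(sym_rate: float, peak_deviation_hz: float) -> float:
    """Carson's rule estimate of occupied FSK bandwidth (an approximation)."""
    return 2 * (peak_deviation_hz + sym_rate / 2)

# 4800 bps as 2FSK: 4800 sym/s; as 4FSK (2 bits/symbol): only 2400 sym/s.
bw_2fsk = carson_bandwidth(symbol_rate(4800, 1), 1500)  # illustrative deviation
bw_4fsk = carson_bandwidth(symbol_rate(4800, 2), 1500)
```

With the assumed deviation, 2FSK lands well above a 6.25kHz channel while 4FSK fits inside it, which is the reasoning behind choosing 4(G)FSK.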

    For the 1200bps, FSK and OOK are still options:

    1. SX1278 : FSK : 2.4kbps BR, 4.8kHz freq.dev., 7.8kHz Rx BW.
    2. SX1278 : OOK : 3.0kbps, 5.2kHz Rx BW.

    Is there a suitable library for the SI4463?

    • RadioHead library can send 4FSK data (with a suitable config file), but can't receive it. 
    • The #NPR New Packet Radio project supports 2FSK as well as 4FSK, but it might be difficult to strip the radio code from the application.  The interfacing to the SI4463 is very different from other sources, and the application code seems very much interwoven with the interface to the radio.
    • Zak Kemble's library was the first one I got working with 4GFSK.  But it's interrupt based and many functions don't yield a return code.
    • The official SiLabs WDS3 tool can create an example project.  Unfortunately, the header files are nearly unusable.  The generated header file with commands is about 3800(!) lines long, and the header file listing the properties is 5800(!) lines long.  I spent more time finding the right "define" statement than it would have taken me to write it myself based on the HTML documentation.
    • The Arduino-LoRa library interface can be used as a template.  It inherits from the Stream class, which will make it easier to interface it to other libraries such as PacketIO.

    So I decided to merge Zak's code and the official WDS3 code into my favorite radio library : RadioLib.

    Now with the library working (based on Zak Kemble's code), I noticed that sending the 10-byte packets from Zak Kemble's example takes 57ms.  That's measured from the end of the 0x31 START_TX command to the falling edge of the IRQ that signals PACKET_SENT.  For 1200bps, we need to send 6 bytes every 40ms.  If we can't get the TX time down, we'll have to group codec2 frames in a single wireless packet.  Sending 6 bytes takes 51ms (as verified with the logic analyser: time between end of START_TX and falling PACKET_SENT IRQ).  This matches the theoretical limit: the 4 extra bytes account for the 6ms difference, i.e. 32 bits / 6ms = 5.3kbps.  The radio is configured for 2.4ksymbols/s (= 4.8kbps for 4GFSK).

    SI446x potential packet structure

    The following settings are used in Zak Kemble's library:

    1. Preamble : 8 bytes (sine wave) : sent at 2.4kbps, not at 4.8kbps like the rest of the packet.
    2. Sync word : 2 bytes
    3. Field 1 : 1 byte (length of the packet)
    4. CRC-Field 1 : 2 bytes
    5. Field 2 : data bytes (e.g. 6 bytes)
    6. CRC-Field 2 : 2 bytes

    So we have 15 bytes overhead for our packet.  With respect to time, we even have 23 bytes overhead, because the preamble is sent out at half the bit rate.  So the total packet time = (23 + N) * 8 / 4800 [s], where N is the number of data bytes. 
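The overhead formula above can be checked against the measured timings (a sketch; the measured 51ms and 57ms include some command/IRQ latency on top of pure air time):

```python
def packet_time_ms(n_data_bytes: int) -> float:
    """Air time of one SI4463 packet with the settings above.
    Overhead in byte-times at 4800 bps: 8 preamble bytes at half rate
    (counted as 16) + 2 sync + 1 length + 2 CRC + 2 CRC = 23."""
    return (23 + n_data_bytes) * 8 / 4800 * 1000

six_bytes = packet_time_ms(6)    # compare to the measured 51 ms
ten_bytes = packet_time_ms(10)   # compare to the measured 57 ms
```

The computed air times land a few milliseconds under the measured values, consistent with the extra command and interrupt latency in the measurement.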

    Recording taken with RSP1A and CubicSDR opened in Audacity.  The selected length...

  • Audio line level

    Christoph Tack01/24/2021 at 19:35 0 comments

    Codec2 expects nominal signal levels to encode speech properly.  So for testing how codec2 encodes our packets, the PC will generate speech audio on its line-out (what voltage level to use here?), which is connected to the left line-in of the SGTL5000 audio codec, which in turn converts the audio voltage levels to 16-bit signed PCM samples.

    Maximum audio output voltage level

    As a laptop only has a headphone output, no line out, I used an external sound device.  The cheapest possible USB audio card has been used here.  It only costs €2.

    To find the maximum amplitude it can deliver, we download a 1kHz sine wave 0dB file (maximum amplitude).  The values of the audio samples vary from -1 to +1.

    1kHz 0dB wave file in Audacity

    Play it and set your computer sound volume to maximum. Then measure the amplitude. If the wave form starts clipping, then there's a problem in your audio system.

    USB Soundcard at maximum amplitude, oscilloscope snapshot

    The unloaded headphone output of the Lenovo L580 ThinkPad even goes up to 1.68Vp (= 3.36Vpp).  Note that the SGTL5000 only accepts up to 2.83Vpp (= 1Vrms) line-in voltage levels.

    Ok, so now we know that different audio sources have different maximum voltage settings.

    Nominal signal level

    Maximum signal level is -1 to +1, but what should we use as nominal signal level?  Let's download a speech sample from a news report, that one should be set correctly.

    Audacity view from news report, signal level limited from -0.7 to +0.7.

    SGTL5000 audio codec

    The analog gain stage before the ADC (controlled by the CHIP_ANA_ADC_CTRL register) of the SGTL5000 will need to be adjusted so that when a 0dB sine wave is played at maximum amplitude from the USB-sound card, it will result in 16bit samples that are also maximum amplitude.

    Let's take a 100Hz sine wave at 0dB, so that we have at least 80 samples per cycle.  Remember we're using an 8kHz sampling frequency because that's a codec2 requirement.  Of course we could sample at higher frequencies, but then the ESP32 would have to downsample again.

    The I2S samples could be printed to the Arduino serial plotter to get an idea of the amplitude.
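Before looking at the serial plotter, the expected sample values can be sanity-checked in Python (a sketch with synthetic samples; a full-scale float sine maps to the ±32767 range of 16-bit signed PCM):

```python
import math

SAMPLE_RATE = 8000   # codec2 requirement
FREQ = 100           # Hz -> 80 samples per cycle at 8 kHz

def sine_samples(n: int, amplitude: float = 1.0) -> list[int]:
    """Full-scale float sine converted to 16-bit signed PCM samples."""
    return [round(amplitude * 32767 * math.sin(2 * math.pi * FREQ * i / SAMPLE_RATE))
            for i in range(n)]

samples = sine_samples(80)            # one full cycle
peak = max(abs(s) for s in samples)   # should sit near full scale
```

If the samples printed by the ESP32 peak well below this for a 0dB input, the analog gain in CHIP_ANA_ADC_CTRL needs to come up; if they clip flat at ±32767, it needs to come down.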

  • ESP32 with SGTL5000

    Christoph Tack01/03/2021 at 14:36 0 comments

    Hardware

    The SGTL5000 uses a virtual ground for the audio outputs.  This likely makes it unsuitable for use in smartphone headsets in which the ground of the microphone is shared with the audio output.  To be tested.

    1. NodeMCU-32S
    2. Adafruit 1780 : Adafruit Accessories Audio Adapter Board for Teensy

    Generating I2S

    The annoying thing about the Adafruit audio adapter is that it's not fully open source.  These are the supply voltages:

    • VDDD = 1.8V
    • VDDIO = 3.3V (powers line out)
    • VDDA = 3.3V (powers the headphone)
    I2S standard format : data is one bit delayed with respect to WS edges.

    On the SGTL5000 datasheet, this one-bit delay of I²S format with respect to the left-justified format is shown in Figure 10 (I2S Port Supported Formats).  This is normal I²S behavior.

    The delay could be removed by setting the i2s_comm_format_t in the ESP32 to 0, but I'll just leave it to the standard setting.

    The SGTL5000 considers the 16-bit data as signed format.  Its analog output is inverted, which doesn't matter much for audio: Voutmax corresponds to 0x8000 = -32768, while Voutmin corresponds to 0x7FFF = 32767.

    The sample code to generate a 200Hz sine wave on the left channel of line-out and headphone can be found here.

    References

    1. SGTL5000 driver on Github (by PJRC)
    2. Interfacing an audio codec with ESP32 – Part 1 and Interfacing an Audio Codec with ESP32 – Part 2
    3. Audio Adaptor Boards for Teensy 3.x and Teensy 4.x
    4. ESP32 I2S Internet Radio (with software MP3 decoding inside ESP32)
    5. esp32_audio
    6. ESP32-2-Way-Audio-Relay

  • References

    Christoph Tack09/16/2020 at 19:43 0 comments

    Projects using the SI4463

    Commercial products

    Kiwi-tec LAP-E01

    Prior art

    nRF24 based

    1. Long Range Arduino Based Walkie Talkie using nRF24L01 : many similar projects, all using the RF24Audio library.

    RFM12 based

    1. Walkie Talkie Duino using RFM12B "open source" (only to the Kickstarter backers).

    RFM22 based

    Codec2WalkieTalkie

    Analog FM based

    1. DRA818V analog FM module (lots of harmonics, will need a license to operate)
    2. Auctus A1846S : HamShield (by Casey Halverson), also available in Mini version.
      1. HamShield on Tindie
      2. Kickstarter
      3. Hackaday.io
      4. Instructables
      5. Github
      6. InductiveTwig

    Comparable projects

    KISS modem interface

    LoRa

    HamShield LoRa (by Casey Halverson)

    STM32

    ESP32

  • Hardware choices

    Christoph Tack09/15/2020 at 18:09 0 comments

    Platform

    Flash size requirements

    The codec2 library needs about 87KB of flash, while RadioLib needs about 10KB.  Then there are also the base Arduino libraries, and we still need to add our own code.  To be on the safe side, a device with at least 256KB of flash will be needed.

    ESP32

    A test application was built, based on the Arduino codec2 library, but it crashed.  This has been solved in esp32-codec2.

    Some have presumably also got it to work before I did, but they are unwilling to share their source code:


    STM32

    STM32F4Discovery

    Rowetel/Dragino Tech SM1000

    NUCLEO-L432KC

    Runs at only 80MHz; might be an option to shrink size.

    More info on STM32 development

    nRF52

    64MHz

    github : Implements codec2 on a Adafruit Feather nRF52 Bluefruit LE.

    Codec2 has been modified so that it can be built using the Arduino framework.  I doubt this implementation works correctly.

    Audio IO

    I²S

    On ESP32, using I²S is definitely advantageous because it can use DMA, which off-loads the reading and writing audio data from the processor.

    As we're only processing low quality 8kHz speech here, a high-end audio codec like the SGTL5000 is not necessary, but it might be a good choice after all:

    1. open source support (pjrc)
    2. I²S sink & source in a single device.
    3. High quality audio might be useful for other projects and designs.
    4. Extra features:
      1. Input: Programmable MIC gain, Auto input volume control
      2. Output: 98dB SNR output, digital volume
    5. Development board price is acceptable.

    A cheaper alternative is the Waveshare WM8960 Audio HAT (technical info).

    PWM-DAC & ADC

    The SM1000 and NucleoTNC contain the analog circuitry we need.

    Adafruit Voice Changer also features some form of audio pass-through

  • Speech codec

    Christoph Tack08/28/2020 at 19:16 0 comments

    Codec options

    Codec2

    Opus

    • open source, royalty free
    • replacement for Speex
    • down to 6kbps
    • used in VoIP-applications (e.g. WhatsApp)

    MELPe

    • NATO standard
    • licensed & copyrighted

    Speech-to-text-to-speech

    Using a speech codec, data transmission can be brought down to about 1200bps.  But how can we reduce the data even further?  Let's take a 1min52s mono 8kHz speech sample as an example:

    1. ve9qrp.wav : 1,799,212 bytes : 128000bps
    2. ve9qrp.bin : codec2 1200bps encoded : 16,866 bytes : 1200bps
    3. ve9qrp.txt : audio transcription of ve9qrp.wav : 1,178 bytes : 85bps

    Using codec2, we get a 106:1 compression ratio; using text, a 1527:1 compression ratio.  That's almost fifteen times better!
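The ratios follow directly from the file sizes listed above (a quick sketch):

```python
wav_bytes = 1_799_212     # raw 8 kHz 16-bit mono, 1 min 52 s
codec2_bytes = 16_866     # codec2 encoded at 1200 bps
text_bytes = 1_178        # plain-text transcription

def ratio(raw: int, compressed: int) -> int:
    """Integer compression ratio, truncated like the figures in the log."""
    return raw // compressed

codec2_ratio = ratio(wav_bytes, codec2_bytes)   # vs. raw audio
text_ratio = ratio(wav_bytes, text_bytes)       # vs. raw audio
gain = ratio(codec2_bytes, text_bytes)          # text vs. codec2
```

Text beats codec2 by roughly a factor of fourteen, which motivates the speech-to-text-to-speech idea below.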

    If we use speech recognition on the transmitting side, then send the transcription over and use speech synthesis on the receiving side, this might work.

    Speech recognition

    https://pypi.org/project/SpeechRecognition/

    Even though speech recognition engines are very good these days, they're still not as good as human beings.  The transcript text could be shown to the speaker during transmission.  In case of errors, the speaker could repeat the incorrect word or spell out its characters.

    A speech engine also requires a language preset.  That shouldn't be too much of a hurdle because most of us only commonly use a single language.

    Is there good speech recognition software that runs offline?

    Is speech recognition software not too power hungry?

    Speech synthesis

    Using Linux command line tools


    Codec2 Configuration

    As the walkie-talkie will use digital voice transmission, we need a way to digitize speech.  Several open-source speech codecs are available.  We will focus on low-bitrate codecs because we want long range, so Opus and Speex won't do.  There's one codec that excels: codec2:

    • bitrates as low as 700bps possible (but not usable, see Codec2 evaluation)
    • open source
    • existing implementation on PC, STM32 and nRF52
    • used in ham radio applications (e.g. FreeDV)

    Codec2 technical details

    Audio input format

    16bit signed integer, 8kHz sample rate, mono

    Codec2 packet details

    Reference : codec2 source code

    Encoded data rate [bps] | Bits/packet | Bytes/packet | Time interval [ms] | Packets/s
    ------------------------|-------------|--------------|--------------------|----------
    3200                    | 64          | 8            | 20                 | 50
    2400                    | 48          | 6            | 20                 | 50
    1600                    | 64          | 8            | 40                 | 25
    1400                    | 56          | 7            | 40                 | 25
    1200                    | 48          | 6            | 40                 | 25

    When using one of the three lowest data rates, there's a drawback: losing a single packet will cost you 40ms of audio.
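Every row in the table follows from bitrate × frame interval (a sketch):

```python
def codec2_frame(bit_rate: int, interval_ms: int) -> tuple[int, int, float]:
    """Bits per packet, bytes per packet and packets/s for a codec2 mode."""
    bits = bit_rate * interval_ms // 1000
    return bits, bits // 8, 1000 / interval_ms

mode_1200 = codec2_frame(1200, 40)   # the mode selected for this project
mode_3200 = codec2_frame(3200, 20)
```

The 1200bps mode works out to 6 bytes every 40ms, the feed rate quoted in the data link layer log.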

  • Physical layer : wireless communication

    Christoph Tack08/26/2020 at 19:25 0 comments

    Theory

    Channel capacity

    Shannon-Hartley law: C = B·log2(1 + S/N), where C is channel capacity [bps], B is bandwidth [Hz] and S/N is the linear signal-to-noise ratio.

    Example: dPMR (C = 4800bps, B = 6250Hz): S/N = 2^(C/B) − 1 = 2^0.768 − 1 ≈ 0.70, so S/N must be at least 0.70.

    Number of signal levels needed

    Nyquist's theorem: C = 2·B·log2(M), where M is the number of signal levels per symbol.

    Example: dPMR (C = 4800bps, B = 6250Hz): M = 2^(C/(2·B)) = 2^0.384 ≈ 1.3 levels.
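Both bounds are one-liners to evaluate (a sketch, using the dPMR numbers above):

```python
from math import log2

def shannon_min_snr(capacity_bps: float, bandwidth_hz: float) -> float:
    """Minimum linear S/N to reach a capacity in a bandwidth (Shannon-Hartley)."""
    return 2 ** (capacity_bps / bandwidth_hz) - 1

def nyquist_levels(capacity_bps: float, bandwidth_hz: float) -> float:
    """Number of signal levels M needed, from Nyquist: C = 2*B*log2(M)."""
    return 2 ** (capacity_bps / (2 * bandwidth_hz))

snr = shannon_min_snr(4800, 6250)     # dPMR example
levels = nyquist_levels(4800, 6250)
```

Both results match the dPMR figures worked out above.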

    Noise floor

    –174dBm is the thermal noise floor at room temperature in a 1Hz bandwidth.  For a bandwidth B, the noise floor is −174 + 10·log10(B [Hz]) dBm.

    e.g. for 10kHz bandwidth, the noise floor is -134dBm.

    (see Long-range RF communication: Why narrowband is the de facto standard, Texas Instruments)


    Legal limitations

    We can only make use of unlicensed bands.  Some bands only allow pre-certified equipment and fixed antennas.  Here are some options for unlicensed spectrum.  I left out the <100mW options and constrained myself to the sub-1GHz options.  If you're looking for a DIY-solution for 2.4GHz, have a look at the nRF24Audio library.

    27MHz

    1. Citizen band : 12W, SSB, 10kHz channels, 26.690MHz to 27.410MHz, some channels excluded
      1. Packet radio Germany : 27.235 MHz and 27.245 MHz
      2. Packet radio Netherlands: 27.235 MHz and 27.395(wikipedia)/27.405 MHz
      3. Packet radio Belgium : forbidden
    2. SRD : 100mW, 5 10kHz wide channels around 27MHz, <0.1% duty cycle

    169MHz

    1. SRD : 0.5W, 169.4MHz to 169.475MHz, 50kHz channels, <1% duty cycle : BIPT B01-10

    446MHz

    1. PMR446 : 0.5W, 6.25kHz or 12.5kHz channels, 446MHz to 446.2MHz
      1. dPMR446 aka dPMR tier 1, ETSI TS 102 490 & ETSI TS 102 587.

    823-832MHz

    1. Intercom : 100mW, BW<200kHz

    865-868MHz & 874-874.4MHz & 917.3-918.9MHz

    1. SRD : 0.5W, BW<200kHz, divided into 4 allowable sub bands, <2.5% duty cycle

    869.4-869.65MHz

    1. SRD860 : 0.5W, BW<250kHz, <10% duty cycle

    Side note

    Polite spectrum access = listen before transmit (LBT) and adaptive frequency agility (AFA).

    As contradictory as it might seem, LBT+AFA is no benefit over a 10% duty cycle.  It restricts the system to 100s per hour per 200kHz bandwidth (= 2.8% duty cycle), a maximum transmission time of 4s, and so on (see ETSI EN 300 220-1 V3.1.1 (2017-02), §5.21.3.1).


    Modulation types

    LoRa

    LoRa is a wide-band modulation (> 125kHz), which forces us to keep duty cycles < 10%.  To get enough throughput, SF6 would have to be used.  When calculating the air time, we can achieve air times similar to narrow-band 4GFSK, but LoRa needs many times (×20) the bandwidth of 4GFSK to do so.
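The air-time comparison can be reproduced with the standard air-time formula from the Semtech SX127x datasheet (a sketch; the defaults match the Bw = 125kHz, Cr = 4/5, SF7 configuration used in the tests below):

```python
from math import ceil

def lora_airtime_ms(payload_len: int, sf: int = 7, bw_hz: int = 125_000,
                    cr: int = 1, preamble: int = 8, crc: bool = True,
                    implicit_header: bool = False, ldro: bool = False) -> float:
    """LoRa packet air time per the SX127x datasheet; cr=1 means coding rate 4/5."""
    t_sym = (2 ** sf) / bw_hz * 1000            # symbol time [ms]
    t_preamble = (preamble + 4.25) * t_sym
    num = 8 * payload_len - 4 * sf + 28 + 16 * crc - 20 * implicit_header
    n_payload = 8 + max(ceil(num / (4 * (sf - 2 * ldro))) * (cr + 4), 0)
    return t_preamble + n_payload * t_sym

airtime_10b = lora_airtime_ms(10)   # SF7, 125 kHz, CR 4/5, CRC on
```

At SF7 in 125kHz the air time is comparable to 4GFSK in a 6.25kHz channel, which is the ×20 bandwidth penalty mentioned above.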

    Background on LoRa

    Possible ICs : SX1278/RFM98

    Parameters

    The code used is here.  The client keeps sending data to the server.  The server acknowledges each packet.  Every 10s, the server prints out a report.

    The RSSI is low because both modules are connected to a u.fl/SMA cable assembly which ends in a 50ohm load.  These cable assemblies don't perform well.  The signal level could be dropped further by removing the cable assemblies.

    RadioHead library, reliable datagram

    • Bw = 125 kHz, Cr = 4/5, Sf7 = 128chips/symbol, CRC on :
      • 10 byte/frame : Total bytes : 1160      Total packets : 116     Bitrate : 928bps   Average RSSI : -112.82  Average SNR : 6.78
      • 30 bytes/frame : Total bytes : 2700      Total packets : 90      Bitrate : 2160bps  Average RSSI : -114.97  Average SNR : 5.02
      • 60 bytes/frame : Total bytes : 3900      Total packets : 65      Bitrate : 3120bps  Average RSSI : -110.23  Average SNR : 7.68
    • Bw = 500 kHz, Cr = 4/5, Sf7 = 128chips/symbol, CRC on :
      • 10 bytes/frame : Total bytes : 4680      Total packets : 468     Bitrate : 3744bps  Average RSSI : -110.21  Average SNR : 0.70
      • 30 bytes/frame : Total bytes : 10200     Total packets : 340     Bitrate : 8160bps  Average RSSI : -108.82  Average SNR : 1.40
      • 60 bytes/frame : Total bytes : 14880    ...

Discussions

Christoph Tack wrote 12/10/2023 at 18:42 point

Reticulum works with any KISS device, as long as it allows for a minimum MTU of xx bytes (I forgot the number).
Anyway: If you want support from the Reticulum creator, you'd better use RNode hardware (https://markqvist.github.io/Reticulum/manual/hardware.html)


aaaaaa wrote 12/10/2023 at 16:54 point

Look at this: https://reticulum.network/index_pl.html

Is it possible to use it on this device? And on the free bands (LoRa, 433 MHz, etc.)?


Christoph Tack wrote 03/18/2022 at 19:29 point

Hi Jon, the building blocks are there (Si4463 interface to the MCU with support for packets >128 bytes, Codec2 decoding/encoding, analog audio interface, encryption), but I'm still undecided about the wireless protocol to use and about the final implementation: either a custom standalone unit or an add-on to a smartphone running APRSdroid/codec2talkie.
The latter option allows integration with existing tools (such as unsigned.io's RNode), but has the drawback of requiring a smartphone.

I'd like to reuse existing protocols where possible.  I've implemented APRS because it seems to be the standard among hams, but it's an awful protocol.  Maybe I'll have a look at Google's protocol buffers.


jon.scheer wrote 03/17/2022 at 21:52 point

Hello.  Just curious as to what the current status of the project is...?  Cheers.


Simon Merrett wrote 01/14/2021 at 21:45 point

It's coming along well - good work! 


Christoph Tack wrote 11/12/2020 at 12:55 point

If I wanted to use Opus over WiFi, the easiest solution would be to open WhatsApp on my smartphone and start a call, wouldn't it?  I want to try to improve on the common VHF/UHF HT.  WiFi isn't suitable for that because of its limited range and poor penetration through buildings.  You could use directional antennas, but how would you keep them aligned?  I opted for Codec2 because it also works at very low bitrates (<6 kbps).  Lower bitrates also allow a longer range.
You're right that a mesh protocol won't do for voice comms.  There'll be too much latency, and throughput will be an issue as well.  I'm aware of the Disaster Radio and Meshtastic projects, but I think there's little I can reuse from them.


Daniel Dunn wrote 11/11/2020 at 20:00 point

What about using the Opus codec over WiFi?  It would be near-impossible to switch between them automatically in a multicast environment, but you could let the user decide.

Mesh infrastructure like BATMAN is probably going to be plenty fast for 48 kbps Opus, and it would take much of the load off 915 MHz, which we all need to try hard not to trash.


Christoph Tack wrote 10/28/2020 at 19:48 point

I'll first try to get my hands on a STM32F4Discovery board (new or old version).  These seem to be out of stock everywhere.  I haven't made up my mind yet on the audio transducers.  I prefer to design in something that can easily be replicated.


Simon Merrett wrote 10/29/2020 at 08:17 point

How about taking a chance with https://uk-m.banggood.com/STM32F407VET6-Development-Board-Cortex-M4-STM32-Small-System-ARM-Learning-Core-Module-p-1460490.html

Or you could look at using a slightly different model (F411 for example). I do think a general port to more readily available microcontrollers would be fantastic. I know esp32 would be on many people's list but I would prefer SAMD51. 


Simon Merrett wrote 10/29/2020 at 09:03 point

Doh, that's the wrong one, no? Aren't you after the stm32f405? 


Christoph Tack wrote 11/01/2020 at 18:33 point

Because I had the ESP32 at hand, I started implementing it there.  After finding a bug in Codec2 and tripling the ESP32's task memory, I now have an application that takes a 40 ms audio frame, encodes it (takes 10 ms) and then decodes it (takes 24 ms).  So real-time use would be possible.  I still have to check whether the decoded audio is OK.


Simon Merrett wrote 11/01/2020 at 18:59 point

Well done! May I ask what you had to change to make it work (specifically the bug)? 


Simon Merrett wrote 10/27/2020 at 21:32 point

Well found! The existing implementation is very interesting. The PDM mic filter is a handy addition. Will you try to recreate it yourself in your own hardware? 


Christoph Tack wrote 08/16/2020 at 11:13 point

Initially I'm experimenting on a Wandboard (i.MX6Q), just because it happened to be in my cabinet.  I'm planning to use it on a Raspberry Pi Zero with Python.  I might later port the (existing) implementation to an STM32F4, but I guess that will take a lot more effort.


Simon Merrett wrote 08/16/2020 at 14:54 point

I agree with you that it would be a significant effort, but illuminating to understand what the process looks like to get it onto lower-performance embedded systems. 


Simon Merrett wrote 08/11/2020 at 07:35 point

Codec2? I'm intrigued to see what processor you port this to; it would be fantastic to have a way of using it on more embedded devices. Very excited to follow your project. Thanks for posting it. 

