
muikkuRF

Day one with the HackRF One radio

Using the HackRF One with Ubuntu 22.04.1 LTS

Installed hackrf

sudo apt-get -y install hackrf
  • Installed gqrx

    sudo apt-get install gqrx-sdr
    
    ran command "gqrx" in terminal, started DSP in gqrx and it worked. We were able to receive radio signals and listen to FM radio stations with hackRF One.
    

AM frequencies.

Tried to listen to AM frequencies with gqrx but did not manage to receive any signals. It turns out the HackRF One antenna has a 75–1000 MHz range, and most of the AM frequencies I found were below 75 MHz. I will have to try this at a different location and find some global AM stations to listen to.

HackRF antenna

Next we will try to get some audio spectrum waterfall screenshots for the AI.

Some updates in regards to MuikkuRF

We have discussed the following goals / approaches (20-09-2022):

  • AM vs FM recognition;
  • Waterfall sweep; recognize stations and signals, and collect samples;
  • Decide on an approach for AI training: raw signals converted to a NumPy array, or image recognition with an algorithm like YOLO?
  • Logging functionality to show activity on a selected band;
  • Web / GUI wrapper for usability / monitoring.

Update 27-09-2022

Currently there are two teams: one dedicated to collecting signals and figuring out how to build a good signal database, the other focused on merging the AI algorithm with real-time recognition. AM vs FM recognition has for now been abandoned due to the lack of viable AM signals being found.

The HackRF was taken to a 7th-floor rooftop in the Kannelmäki area. This provided good reception for a lot of FM frequencies, as well as other interesting signals that will be investigated. Pictures of this will be added later.

One of the troubleshooting steps taken in using the HackRF on Linux was manually building the libhackrf and host-side packages from source to match the firmware currently on the HackRF. apt-get install hackrf installs the 2017 version of these packages, whilst our HackRF shipped with the 2021 firmware. Downgrading the firmware is possible, but I opted to upgrade the host-side packages instead. Tools like hackrf_sweep are now available and can be useful for automatically recognizing strong signals. qspectrumanalyzer, gqrx and other tools have been used so far. These will probably work better in a native Linux installation because of the high data bandwidth that the HackRF transfers via USB to the VM, so the next step is making a (dual-boot) environment and reconfiguring the signal environment there.
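As a rough illustration of that idea, here is a minimal sketch of parsing hackrf_sweep's CSV-style output (date, time, hz_low, hz_high, bin width, sample count, then one dB value per bin, per the tool's documentation) to flag strong bins. The sweep range and the -70 dB threshold are arbitrary example values, not settings from our project.

import subprocess

THRESHOLD_DB = -70  # arbitrary example threshold

# sweep 80-1000 MHz; hackrf_sweep prints one CSV line per sweep segment
proc = subprocess.Popen(["hackrf_sweep", "-f", "80:1000"],
                        stdout=subprocess.PIPE, text=True)

for line in proc.stdout:
    fields = [f.strip() for f in line.split(",")]
    if len(fields) < 7:
        continue
    hz_low, bin_width = float(fields[2]), float(fields[4])
    # one power reading (dB) per frequency bin from field 7 onwards
    for i, db in enumerate(float(f) for f in fields[6:]):
        if db > THRESHOLD_DB:
            freq_mhz = (hz_low + (i + 0.5) * bin_width) / 1e6
            print(f"strong signal near {freq_mhz:.2f} MHz ({db:.1f} dB)")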

SDR-image-classification

Made this image classification program with Python and TensorFlow. It can detect from a WebSDR (http://websdr.ewi.utwente.nl:8901/) stream screenshot whether there is a signal or not.

GitHub link to this repository: https://github.com/kajami/SDR-image-classification

Instructions for the program came from following this tutorial: https://youtu.be/jztwpsIzEGc

I am running this program in JupyterLab on Windows 10, because I think JupyterLab visualizes data better than running it as a normal Python program on the command line.

You can run the program without jupyter lab by using this repository:
https://github.com/kajami/SDR-image-classification

Installed libraries are:
tensorflow 2.10.0
tensorflow-gpu 2.10.0
opencv-python 4.6.0.66
matplotlib 3.6.0

The dataset was very small: 32 pictures with a signal and 31 without.

The signal screenshots have been captured from AM frequencies.

Signal screenshots
Without signal screenshots
Test data

Program in action

Prediction accuracy.
Prediction from the picture; as the result says, there is no signal.

The next step would be to get this working in a way that recognizes signals from the live stream, not just from screenshots.
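For reference, a minimal sketch of the kind of binary classifier described in this post, assuming the screenshots are sorted into data/signal and data/no_signal folders (hypothetical paths; the actual notebook is in the repository linked above):

import tensorflow as tf
from tensorflow.keras import layers, models

# load screenshots from two subfolders; folder name becomes the label
data = tf.keras.utils.image_dataset_from_directory(
    'data', image_size=(256, 256), batch_size=8)
data = data.map(lambda x, y: (x / 255.0, y))  # scale pixel values to 0..1

model = models.Sequential([
    layers.Conv2D(16, 3, activation='relu', input_shape=(256, 256, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(1, activation='sigmoid'),  # one probability: signal or not
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(data, epochs=10)
model.save('imageclassifier.h5')  # the file name loaded in a later update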

4-10-2022

Some significant steps have been made in analyzing signals from the HackRF.

Ubuntu 20.04 has been determined to be the latest Ubuntu distribution compatible with the following:

– gqrx

– qspectrumanalyzer

– hackrf tooling (2021 FW)

Two days prior to writing this documentation a new version of the firmware was released. Because I did not want to risk flashing it to the HackRF, the 2021 firmware and host tools will be used. This should not impact any key functionality or results for our use case (spectrum analysis).

The hackrf_sweep backend has enough throughput to sweep the entire 0–6 GHz spectrum at least once, sometimes multiple times, per second depending on the system. By default the output format is not necessarily readable for humans; see the bottom screenshot:

The application ‘QSpectrumAnalyzer’ can translate the output from this command and pipe it into a visual representation. This gives us the following result.

When we zoom in, highlight the powerful signals and map them to well-known frequencies, it shows that the HackRF is functioning as expected and is receiving different kinds of signals. QSpectrumAnalyzer differs from gqrx in that it does not provide the functionality to listen to the scanned signals.

The green square indicates the range of the 800 MHz band that gqrx is zoomed into. After some research, we can conclude that this signal is owned by DNA as part of their LTE network.

Spectrum analysis of ‘well-known’ registered radio stations at Traficom

5-10-2022

I wrote a small Python script that loads the model and tries to detect whether there is anything on the screen that the model can recognize.

I loaded the model.

model = models.load_model('imageclassifier.h5')

Since the image classifier doesn’t do any position detection, I made it report what it is detecting to the console and switch the screen recording from grayscale to RGB whenever it is detecting anything.

However, the detection accuracy was very weak and I wasn’t really sure whether it was detecting the signal properly, since I’m not exactly sure where on the screen the detection is happening.

To make the detection work properly it is crucial to know where the detection is happening, so I have started looking into other types of models.
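For illustration, a sketch of what that detection loop can look like; the 256x256 input size, the 0.5 threshold, and which class counts as "signal" are assumptions, not confirmed details of the script:

import cv2
import numpy as np
import pyautogui
from tensorflow.keras import models

model = models.load_model('imageclassifier.h5')

while True:
    screen = np.array(pyautogui.screenshot())        # RGB screenshot
    small = cv2.resize(screen, (256, 256)) / 255.0   # match the model input
    prob = model.predict(np.expand_dims(small, 0))[0][0]

    frame = cv2.cvtColor(screen, cv2.COLOR_RGB2BGR)
    if prob > 0.5:
        print('detecting something:', prob)          # report to the console
    else:
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # grayscale when idle
    cv2.imshow('detection', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cv2.destroyAllWindows()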

6-10-2022

Some updates regarding AM samples for the project.

We ordered a cheap Nooelec SDR bundle from Amazon, which came with a longer telescopic antenna, so now we are able to collect AM samples as well.

  • Nooelec NESDR SMArt v4
  • Antenna base w/ 2 m RG58 cable
  • Telescopic antenna mast

We used this hardware setup with the CubicSDR software and got it to work, but finding any AM signals was still hard. Then we found this website about direct sampling mode (https://www.rtl-sdr.com/rtl-sdr-direct-sampling-mode/), which, with a small hardware mod, allows the dongle to tune to the HF frequencies where ham radio and many other interesting signals are found.

Luckily, on this SDR version the hardware mod was already done, so we didn’t have to start soldering. We could instantly use direct sampling mode from CubicSDR.

With the Q-ADC sampling mode on, we were able to pick up some AM signals and demodulate some audio from the AM band.

We also noticed that if we touched the antenna while it was active, the signals got stronger. This happens because your body becomes “part of the antenna” and makes it bigger. It doesn’t work on all frequencies, but it can be helpful if you are trying to listen to a weak signal.

AM signal

Now that we are able to pick up and demodulate AM signals too, the plan is again to collect FM and AM signals for the project.

9-10-2022:

Update to signal detection model:

Since the last model didn’t have position detection, I decided to make a new model that does. I decided to use YOLOv5 for this task.

I needed to do the following tasks:

  • Gather a dataset of images
  • Label our dataset
  • Export our dataset to YOLOv5
  • Train YOLOv5 to recognize the objects in our dataset
  • Write code for screen capturing

First I started gathering training data from websdr.org. They have online SDRs with waterfall displays that can be used to collect training data.

Screenshot from waterfall display

I took screenshots of the waterfall display until I had around 50 images, each with multiple radio signals in them. I assumed that would be enough for my test model.

Next I started labeling the images. I used an annotation tool to mark where the signals are in each picture; I labeled around 220 signals in the pictures I had taken.

The yellow squares in the picture are manually drawn with an annotation tool.

After finishing the labeling I started training my model. Since my laptop doesn’t have a GPU, I decided to use Google Colab for the training.

Training the model

After finishing the training I looked at the analytics, and it seemed like the training was a success: sample test images were recognized with 80% accuracy, which is enough for now.

Analytics from TensorBoard

I downloaded the model’s .pt file and copied it into a cloned yolov5 GitHub repository (https://github.com/ultralytics/yolov5). I tested it with the detect.py script, which can run detection on pictures and videos. However, I needed the model to detect signals in real time, so I wrote a small Python program that does live detection from the screen.

import torch
import pyautogui
import cv2
import numpy as np

model = torch.hub.load('yolov5', 'custom', path='best.pt', source='local')  # best.pt = the trained weights


# simple loop over screenshots
while True:
    # Take a screenshot
    screen = pyautogui.screenshot()
    # convert to array
    screen_array = np.array(screen)
    # crop region
    crop = screen_array[100:400, 100:1200, :]
    color = cv2.cvtColor(crop, cv2.COLOR_RGB2BGR)

    # do detection
    results = model(color)

    # show live-detection
    cv2.imshow('SDR-signal', np.squeeze(results.render()))

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cv2.destroyAllWindows()

After struggling a bit I got it to work, and it seemed to be pretty accurate. It didn’t make many false detections and seemed to detect most signals from the waterfall display.
Update 10-10-2022:

A bit of a delayed update, but a bunch of FM samples have been added and analyzed from two different geographical locations within Finland (Helsinki and Lahti). I tried to include both radio stations with mostly speech (which can be easily identified by the three stripes indicating silence between sentences) and music radio stations. The view/filtering was adjusted to look as close as possible to the previously uploaded samples.

A sample that shows intermittent silence; this corresponds with patterns commonly associated with speaking.
FM sample that shows almost no silence; this corresponds to a traditional (music) radio station.

As for the licensing of the database used within the Artemis 3 tool, the only source found so far is:

https://www.sigidwiki.com/wiki/Signal_Identification_Wiki:General_disclaimer

Because this seems like a general disclaimer rather than one referring to the contents of the wiki, an e-mail has been sent to the owners of the wiki asking whether we are allowed to use the resources within our project.

As additional information: the signals collected in Lahti were from the ground floor of a building in the city centre. The signals collected in Helsinki were from a 7th-floor roof terrace in the Kannelmäki district.

The following is on the current tasklist:

  • Check the license of the DB > Mail has been sent to contact info found on sigidwiki
  • Collect AM/FM signals > Posted to Github, evaluating quality
  • Updating all documentation
  • Look into automatically collecting signals (optional)

SDR-image-classification with new samples

New samples can be found here: https://github.com/kajami/SDR-project/tree/main/Signal%20samples

I am running this program in JupyterLab on Windows 10, because I think JupyterLab visualizes data better than running it as a normal Python program on the command line.

You can run the program without jupyter lab by using this repository:
https://github.com/kajami/SDR-image-classification

Installed libraries are:
tensorflow 2.10.0
tensorflow-gpu 2.10.0
opencv-python 4.6.0.66
matplotlib 3.6.0

Tried the SDR-image-classification program with a new dataset containing AM and FM signals. The dataset was very small: 16 FM samples plus 2 FM test samples, and 20 AM samples plus 2 AM test samples.

FM signals
AM samples

The program recognized the test samples, but not the other signals, which were AM signals I took from WebSDR. So I think the program recognizes those AM and FM test samples because they are so different in color. I need to grayscale the images and try again to confirm my theory that it recognizes signal images by their color, not by the shape of the signal; see the sketch below.

recognized test sample
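A minimal sketch of that grayscale experiment, converting every image in the dataset so that color can no longer be the distinguishing feature (the 'dataset' / 'dataset_gray' folder names are hypothetical):

import os
import cv2

src, dst = 'dataset', 'dataset_gray'
for root, _, files in os.walk(src):
    for name in files:
        img = cv2.imread(os.path.join(root, name))
        if img is None:
            continue  # skip non-image files
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        out_dir = root.replace(src, dst, 1)   # mirror the folder structure
        os.makedirs(out_dir, exist_ok=True)
        cv2.imwrite(os.path.join(out_dir, name), gray)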

Automatic screenshot capture

Made this Python program (https://github.com/kajami/automatic-screenshot-capture) for our project to help take the screenshots needed for AI training. The program was tested on Ubuntu 22.04, Windows 10 and Windows 11.

It is a command-line program that asks how many screenshots you want and the time between screenshots. It then takes the screenshots and saves them in the ./screenshots folder in the project root directory.

Taking screenshots
Saved screenshots

How to install & use program

Create a virtual environment

Run “pip install -r requirements.txt”

Run “python3 screenshots.py”

Set how many screenshots you want

Set the time between screenshots

Images are saved in the ./screenshots folder

If you want to change the position of the screenshots, you have to change the bbox coordinates (bbox=(0, 450, 1585, 1035)). If you want fullscreen screenshots, use ImageGrab.grab() without the bbox argument instead.
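For illustration, the difference between the two capture modes looks like this; bbox is (left, top, right, bottom) in pixels:

from PIL import ImageGrab

# capture only the region used by the program above
region = ImageGrab.grab(bbox=(0, 450, 1585, 1035))

# capture the whole screen instead
full = ImageGrab.grab()

region.save('./screenshots/region.png')
full.save('./screenshots/full.png')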

Update 20-10-2022: Calculating the frequency

I started writing code to calculate the frequency of the signals I have detected. My idea was to calculate a signal’s position by using number detection: detect the positions of two frequency labels, calculate the distance between them, and derive the scale between pixels and frequency.

Then you can add the per-pixel frequency offset to a detected label’s frequency based on their distance. The code is still a bit unstable, so I am thinking of ways to make it more stable.
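As a worked example (with made-up numbers): if the labels 3900 and 3925 are detected 250 px apart, the scale is 250 / 25 = 10 px per kHz. A signal whose centre sits 87 px to the right of the 3900 label is then at roughly 3900 + 87 / 10 ≈ 3908.7 kHz.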

Using YOLOv5 detection I could detect the numbers; then, using pyautogui, I cropped the numbers into smaller images. After that I used pytesseract to read the numbers in the images, and from that I could calculate the approximate frequency of all the detected signals.

Object detection detecting signals and the frequency numbers

Using the calculations I get a pretty accurate approximation of where the signals are. Since the detection is a bit far away, it’s not entirely accurate: I believe that 3924.9 is actually 3925.00 and 3910.06 is 3910.00, but it’s close enough to round the frequency to the closest integer.

I added this to my previous screencapture.py code.

I used Python 3.8 and imported the following libraries (on top of the YOLOv5 requirements.txt):

numpy 1.23.3
opencv-python 4.6.0.66
pandas 1.5.0
PyAutoGUI 0.9.53
pytesseract 0.3.10

pred = results.pandas().xyxy[0]
i = 0
signal_middle = []
num_middle = []
text_abs = 0
text = ''
dist_coords = 0
freq_step = 0
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
for index, row in pred.iterrows():
    # x-centre of the detection, offset by the crop position
    middle = (row['xmax'] + row['xmin']) / 2 + 100
    if row['name'] == "radio-signal":
        signal_middle.append([index, middle])

    if row['name'] == "numbers" and i < 2:
        i += 1
        xy = [100 + int(row['ymin']), 110 + int(row['ymax']), 100 + int(row['xmin']), 110 + int(row['xmax'])]
        crop_num = screen_array[xy[0]:xy[1], xy[2]:xy[3], :]
        text = pytesseract.image_to_string(crop_num)
        if len(text) == 5:
            # running differences: after the second label these hold the
            # frequency difference and pixel distance between the two labels
            text_abs = abs(text_abs - int(text))
            dist_coords = abs(dist_coords - middle)
            freq_step = dist_coords / text_abs
            num_middle.append([index, middle, int(text)])

        if len(text) == 5 and i == 2:
            print(text_abs, "text abs")
            print(dist_coords, "distance between number coordinates")
            print(freq_step, "how many pixels is one frequency")

        if len(text) != 5:
            print("invalid detection!", len(text))

        print("frequency", text)
        input("press enter to continue")
        # cv2.imshow("cropped number", crop_num)

freq_for_sig = ((signal_middle[0][1] - num_middle[0][1]) / freq_step) + num_middle[0][2]
print(freq_for_sig, "frequency for signal")
Trying the code

Update 24-10-2022: FM_frequency model from test samples

I created a new .pt model file by adding the test samples that we collected earlier. The confidence is high, but the test data didn’t have much variation, so we might need to add some different images and zoom levels.

Update 29-10-2022: Searching for waterfall

I updated the model with the latest pictures, so technically it can now recognize AM versus FM, but since we’re trying to find a Python terminal-controlled waterfall, we will have to train a model for that purpose.

I didn’t have an SDR with me, so I couldn’t test it against a live SDR, but at least the test images worked pretty well. However, the labeled AM and FM images are quite different, so I’m not sure they work together perfectly.

When we have the final product, we will need a more consistent dataset where the FM and AM signals can be detected in the same picture.

I didn’t have a physical SDR with me this week, so I decided to do some research on libraries and software that can use WebSDR servers as the signal source.

SDRSharp

SDRSharp could use signals from various WebSDRs, and the layout was quite simple and easy to use. However, it is a standalone application and not very usable with Python, since it’s hard to get the data out of it. I used the Airspy server to get the signal.

SDR Console

SDR Console had more tools for recording a signal, and it also had a multi-signal recording option, but again it’s quite hard to get the data out without manual work, so even with the multi-signal recording option it’s not very suitable for this project.

Other SDR software I tried

SDRangel and SigDigger in a virtual machine

Similar to the previous software: I couldn’t find a way to get data out of the programs easily. These can be used with WebSDRs, but I couldn’t find one that uses WebSDRs with Python.

I also tried Universal Radio Hacker and gqrx, but they didn’t have WebSDR options, so I couldn’t use them without a physical SDR.

GitHub libraries that could be used for the project:

https://github.com/MLAB-project/pysdr

https://github.com/dswiston/pyFmRadio

https://github.com/pyrtlsdr/pyrtlsdr

https://github.com/dmitryelj/SDR-Waterfall2Img

https://github.com/madengr/ham2mon

https://github.com/shajen/rtl-sdr-scanner

Image labeling for object detection with LabelImg

We are currently doing image labeling for YOLO, and we decided to use a Python app called LabelImg. The GitHub page of LabelImg can be found here: https://github.com/heartexlabs/labelImg

I did this on Windows, and the installation of the program is quite simple. You need to have Python 3 installed and added to Path in Environment Variables.

  1. First I downloaded the zip-file from the GitHub repository
  2. Then extracted it
  3. Then installed PyQt5 and lxml with these commands in cmd:
    pip3 install pyqt5
    pip3 install lxml
  4. Then ran the command
    pyrcc5 -o libs/resources.py resources.qrc
  5. and started the program with the command
    python labelImg.py

More instructions on the installation process are on the LabelImg GitHub page.

Now you should have LabelImg open and running.

Now, on the left side, you can press Open Dir and browse to the folder containing the pictures you want to label.

When you have the image you want to label open, click Create RectBox on the left, then click and drag over the area of the picture you want to label; a dialog then appears where you choose the label for the object.

Also, in the left menu you can switch the format you want to annotate in; we are using YOLO in this project.

LabelImg will create a classes.txt file and a .txt file for each image you label, in the folder you have chosen for labeling.

The classes.txt file lists the labels you are using for these images, and the per-image .txt file has the coordinates of the boxes you drew in LabelImg. Note that there is a same-named .txt file for each .png; an example of the format follows below.
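As an illustration (made-up coordinates), a per-image .txt file in YOLO format has one line per box: a class index followed by the box centre x, centre y, width and height, all normalized to 0..1. With a classes.txt listing "radio-signal" and "numbers", two boxes might look like:

0 0.412 0.305 0.068 0.120
1 0.733 0.540 0.055 0.098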

Update 08-11-2022

SoapySDR + PothosFlow were installed to hopefully get us one step closer to directly receiving, processing and demodulating the signal within Python. SoapySDR, in combination with some code, was successful in receiving said ‘IQ’ data, which is not yet demodulated.

#Steps needed
#0. Imports and device setup that the snippet assumes (hedged completion)
import numpy
import SoapySDR
from SoapySDR import SOAPY_SDR_RX, SOAPY_SDR_CF32

sdr = SoapySDR.Device(dict(driver="hackrf"))
# a sample rate must also be set; 10 MS/s is an assumed example value
sdr.setSampleRate(SOAPY_SDR_RX, 0, 10e6)

#1. Tune SoapySDR to 101.103.000 (101.103 MHz)
sdr.setFrequency(SOAPY_SDR_RX, 0, 101103000)

#2. Set up stream parameters (RX = receiving antenna, receive in CF32
#   (complex float32 x2 == complex64))
pasilaRX = sdr.setupStream(SOAPY_SDR_RX, SOAPY_SDR_CF32)

#create a re-usable buffer for rx samples (complex64 is 2 float32's)
buff = numpy.array([0]*1024, numpy.complex64)

#activate the hackrf and its antennas
sdr.activateStream(pasilaRX)

#fill the complex64 buffer with IQ data
for i in range(10000):
    sr = sdr.readStream(pasilaRX, [buff], len(buff))

If we try to read this data or pipe it directly to a .wav or .mp3 file, we realize it is not playable. This is because of the IQ format commonly used in raw signal processing. A good write-up about what this format is can be found here:

https://www.pe0sat.vgnet.nl/sdr/iq-data-explained/

Unfortunately I have not been able to demodulate the data within Python, even with pre-made libraries. Within PothosFlow, however, I have been able to create a flow that is able to:

  1. receive a signal
  2. pipe the spectrogram to a view element
  3. process the signal through a frequency demodulator (this is the part that we are not yet able to do in Python)
  4. put the signal into an audio sink (in this case it causes playback on the device running PothosFlow)

The full diagram looks like the following:

As a final attempt to receive the signal within Python and be able to do things with it, like generating a spectrogram or making a tuner, I looked into a library called RX_tools. This tool seems promising; however, I am still having trouble getting it to work with the HackRF. In theory it should be able to receive signals and do the demodulation for us. It is a command-line tool, however.

Despite IQ data not being that readable for us humans, I did generate some nice graphs with it that show that the HackRF is at least receiving a proper signal with SoapySDR. Just not a demodulated one.

PLSDR

PLSDR is a Python-based SDR program that uses GNU Radio code blocks for its inner workings. We tested this SDR program to see whether it could be used in our SDR project, because it is written in Python.

Project homepage and GitHub below:

https://arachnoid.com/PLSDR/

https://github.com/lutusp/PLSDR

Installation instructions for the newest version are on the homepage. During installation I ran into some errors that have been reported on the GitHub issues page.

Installation on Windows:

  • PLSDR 2.0
  • Gnuradio 3.8.2.0
  • python 3.10.8

The first issue I ran into was when trying to run the launch_PLSDR.bat file from the scripts folder. The hardcoded paths to the GNU Radio folder were wrong, because the GNU Radio folder structure has changed due to new updates and the PLSDR project is a bit dated.

I edited the launch_PLSDR.bat file and replaced gr-python27 in the paths with tools\python3. The issue is reported here on GitHub:
https://github.com/lutusp/PLSDR/issues/13

The second error I ran into was that, after launching, the program didn’t recognize my RTL-SDR dongle, and I got the following in the terminal.

Someone else has had this problem too on GitHub:

https://github.com/lutusp/PLSDR/issues/14

This problem was solved by commenting out the other possible devices in PLSDR.py.

After this the program launched from launch_PLSDR.bat.

Got very good reception of Finnish FM radio stations, but unfortunately it seems the program can’t record samples without some modification.

Saving a .wav file from a GNU Radio block

Saving a .wav file was actually pretty straightforward. GNU Radio offers a block named “Wav File Sink” which can save the audio input as a .wav file on your hard drive. I used my microphone as the audio source input because I did not have the SDR with me. Remember to set an accurate file path, otherwise it won’t work.

You can find the GNU Radio files (saveWavFile.grc and top_block.py) here: https://github.com/kajami/SDR-project/tree/main/cool_code/gnuradio

Graph showing how to save a .wav file with GNU Radio

I also made my own wav-recorder with Python. You can find it here: https://github.com/kajami/SDR-project/tree/main/wav-recorder
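Not the repository code, but a minimal sketch of what such a recorder boils down to, assuming the sounddevice and scipy libraries:

import sounddevice as sd
from scipy.io import wavfile

rate, seconds = 44100, 5

# record five seconds of mono audio from the default input device
audio = sd.rec(int(rate * seconds), samplerate=rate, channels=1)
sd.wait()  # block until the recording is finished

wavfile.write('recording.wav', rate, audio)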

I/Q-file player

With GNU Radio I managed to play and record an IQ file while drawing the waterfall from the signal. The IQ file was in .wav format, as is the recording.
The recorded .wav file is not an IQ file: it is the sound recorded from the IQ-file signal as it is played through GNU Radio.

This workflow is based on this YouTube video: https://www.youtube.com/watch?v=DvrqljWWgrw uploaded by TheGmr140.

I added the Wav File Sink and the QT GUI Waterfall Sink, so that it shows the waterfall from the signal and can save .wav files.

Update 28-11-2022

After tons of trial and error I was finally able to demodulate multiple .wav audio files from a single .wav IQ file using Python in the terminal. The code works in a similar fashion to GNU Radio, but it doesn’t require any GUI.

I started working with a test IQ .wav signal that was 48 kHz wide and had multiple different AM signals in it. Looking at the file with Audacity I could see information about it: it is indeed 48 kHz and written in 32-bit float format.

Upon listening to the audio, it is a high-pitched static noise without much variance.

I started from my previous code that attempted to demodulate an AM signal but was not working as intended. I am currently using Python 3.8.

First I loaded the following file, ‘AM_Sig_IQ_48khz.wav’:

# scipy 1.9.3
import scipy.io.wavfile as wavfile
from scipy.signal import firwin
# numpy 1.23.4
import numpy as np
# matplotlib 3.6.2
from matplotlib import pyplot as plt
import matplotlib
import datetime

# read in wav-format IQ data
rate, data = wavfile.read('AM_Sig_IQ_48khz.wav')

I wanted to figure out how matplotlib plots the spectrogram. I’m using a function called “specgram” that automatically shows the file as a spectrogram after it has been dismantled into separate I and Q.

To get a working spectrogram plot you need a complex-number array and a sample rate. The sample rate you can get straight from the .wav file, but since the data is float32 we need to turn it into complex numbers first:

Taking the float32 data and transposing the matrix, it can be conveniently separated into I and Q parts by sequence unpacking:

I, Q = data.T

After we have separate I and Q arrays of the transposed float32 data, we can turn them into complex numbers by multiplying Q by the imaginary unit and pairing it with the real part I:

audio = I + 1j*Q

In Python, the suffix “j” makes a number imaginary.

Here’s how each datatype looks:

Transposing puts all the first numbers into one array (I) and the last ones into another (Q), basically turning an array of small arrays into two long arrays.

After that, just add the arrays together after multiplying one of them by the imaginary unit. This pairs a real number with an imaginary number and turns it into a complex number.

After that I could pass the data to the specgram function along with the sample rate to get a visual spectrogram of the file:

I made a formatSpectrogram function to format the GUI view into a clearer look. In it, the spectrogram is flipped on its side, since plt.specgram has it sideways by default and it would take some extra effort to flip it.

formatSpectrogram()           # my own helper that tidies up the plot
plt.specgram(audio, Fs=rate)  # the data is still unshifted at this point
plt.show()

Here’s how our complex number looks in a spectrogram:

You can see there are two different AM signals, which are -7 kHz and 10 kHz away from the center (0). They are a bit blocky since this is not a recording but two audio files manually turned into an IQ file for testing purposes.

It’s possible to use an actual recording as well, but it would make testing a bit more difficult. Here’s an example of such a recording:

In the previous picture we saw two AM signals, but we need to shift the frequency so that the wanted signal is in the center position, so that we can actually get our hands on it.

To move the center 10 kHz down, we have to multiply our complex signal by the following complex exponential:

audio_shift = audio * np.exp(1j*2*np.pi*shift_amount/rate*np.arange(len(audio)))

Basically, we’re multiplying the complex signal by another complex signal with a frequency of 10 kHz and a sample rate of 48 k.

Using GNU Radio graphs to visualize what we are doing: we turned our floats into complex numbers, and now we are multiplying by another signal source, as in the picture. So far our code does exactly the same thing as this GNU Radio block combination.

Now that we have done the math we can check how our spectrogram looks:

It seems that we have successfully moved the frequency to the center, and we can continue by decimating the signal. The current sample rate is 48 kHz and we want 12 kHz, so we first decimate by 4:

# Decimate
x = audio_shift[::4]

Since we decimated, or downsampled, the signal, we also have to scale the sample rate; we’re basically compressing the file to a smaller size. We also change the data from complex to magnitude around this point (the AM envelope), and the new sample rate is 48 kHz / 4 = 12 kHz:

# Throttle
sample_rate = rate / 4

We have to change the sample rate along with the decimation, otherwise the speed of the audio will change.

Next we want to control the gain of the audio. At the moment the gain is all over the place; we want the gain to maintain a constant output signal level regardless of the received signal strength, and after that we want the maintained signal to be at a listenable volume:

# normalize volume
x /= x.std()

# lower the dB (db is a chosen attenuation factor)
x = x / db

Now we could already listen to the audio, but there is still a loud, high-pitched sound that makes listening very difficult. The solution for that is a low-pass filter.

As the name says, it passes all frequencies lower than the set threshold and cuts off the rest. The threshold can be at most half of the sample rate, so in our case 12 kHz / 2 = 6 kHz. I put the cutoff point at 4.2e3 (4.2 kHz, i.e. an 8.4 kHz two-sided bandwidth).

# Low-Pass Filter
taps = firwin(numtaps=101, cutoff=4.2e3, fs=sample_rate)
x = np.convolve(x, taps, 'valid')

Lastly I added code for writing the audio into a .wav file,

passing a name, the sample rate of the audio, and the magnitude data, to which I add an extra dimension with newaxis:

wavfile.write(name, int(sample_rate), (x.imag[:,np.newaxis]))

Now we have listenable audio from the IQ file. We can also easily get out the other frequency, just by changing the shift_amount in the function.

To make it simpler, I wrapped it all into a simple function so it’s easy to use in the future too; a sketch of what that can look like is below.
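The exact function I wrote isn’t shown here, but a sketch of what it can look like, combining the steps above (the magnitude step is the AM envelope mentioned earlier; the function and parameter names are illustrative):

import numpy as np
import scipy.io.wavfile as wavfile
from scipy.signal import firwin

def demodulate_am(iq_wav, out_wav, shift_hz, decim=4, gain_div=4):
    rate, data = wavfile.read(iq_wav)      # float32 interleaved I/Q
    I, Q = data.T                          # split into I and Q
    x = I + 1j * Q                         # build complex samples

    # move the wanted carrier to 0 Hz (e.g. shift_hz=-10e3 for a signal
    # sitting 10 kHz above the centre)
    n = np.arange(len(x))
    x = x * np.exp(1j * 2 * np.pi * shift_hz / rate * n)

    x = x[::decim]                         # decimate, e.g. 48 kHz -> 12 kHz
    sample_rate = rate / decim

    x = np.abs(x)                          # AM envelope (complex -> magnitude)
    x /= x.std()                           # normalize volume
    x = x / gain_div                       # bring the level down

    taps = firwin(numtaps=101, cutoff=4.2e3, fs=sample_rate)
    x = np.convolve(x, taps, 'valid')      # low-pass away the residual whine

    wavfile.write(out_wav, int(sample_rate), x.astype(np.float32))

# the test file has signals 10 kHz above and 7 kHz below the centre
demodulate_am('AM_Sig_IQ_48khz.wav', 'station_plus10k.wav', shift_hz=-10e3)
demodulate_am('AM_Sig_IQ_48khz.wav', 'station_minus7k.wav', shift_hz=7e3)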

And now we have done all the steps in this GNU Radio graph using only Python. It can easily be replicated for other AM signals, and probably other IQ files as well, now that I understand the file types a bit better and have a much better grasp of how AM demodulation works.