Content from introduction


Last updated on 2025-04-14

Overview

Questions

  • How do you write a lesson using Markdown and sandpaper?

Objectives

  • Explain how to use markdown with The Carpentries Workbench
  • Demonstrate how to include pieces of code, figures, and nested challenge blocks

Introduction


This is a lesson created via The Carpentries Workbench. It is written in Pandoc-flavored Markdown for static files and R Markdown for dynamic files that can render code into output. Please refer to the Introduction to The Carpentries Workbench for full documentation.

What you need to know is that there are three sections required for a valid Carpentries lesson:

  1. questions are displayed at the beginning of the episode to prime the learner for the content.
  2. objectives are the learning objectives for an episode displayed with the questions.
  3. keypoints are displayed at the end of the episode to reinforce the objectives.
Challenge

Challenge 1: Can you do it?

What is the output of this command?

R

paste("This", "new", "lesson", "looks", "good")

OUTPUT

[1] "This new lesson looks good"
Challenge

Challenge 2: how do you nest solutions within challenge blocks?

You can add a line with at least three colons and a solution tag.

Figures


You can use standard markdown for static figures with the following syntax:

![optional caption that appears below the figure](figure url){alt='alt text for accessibility purposes'}

Blue Carpentries hex person logo with no text.
You belong in The Carpentries!
Callout

Callout sections can highlight information.

They are sometimes used to emphasise particularly important points but are also used in some lessons to present “asides”: content that is not central to the narrative of the lesson, e.g. by providing the answer to a commonly-asked question.

Math


One of our episodes contains \(\LaTeX\) equations when describing how to create dynamic reports with {knitr}, so we now use MathJax to describe this:

$\alpha = \dfrac{1}{(1 - \beta)^2}$ becomes: \(\alpha = \dfrac{1}{(1 - \beta)^2}\)

Cool, right?

Key Points
  • Use .md files for episodes when you want static content
  • Use .Rmd files for episodes when you need to generate output
  • Run sandpaper::check_lesson() to identify any issues with your lesson
  • Run sandpaper::build_lesson() to preview your lesson locally

Content from Intro and Theory Refresher


Last updated on 2025-09-07

Overview

Questions

  • What are the core hardware and data-logging components of a GPR system, and how do they interact during acquisition?

  • Through which physical contrasts does GPR detect subsurface changes, and how do antenna frequency and bandwidth control resolution and depth?

  • What does a single GPR trace represent in time and in depth, and how is a radargram constructed from adjacent traces?

  • How can we read, inspect metadata, and visualize SEG-Y GPR data in Python using ObsPy without modifying the raw samples?

Objectives

  • Identify the console, antenna, and encoder roles in a modern GPR system and relate them to sampling, time window, and spatial sampling.

  • Explain reflection generation from dielectric permittivity contrasts and the implications for amplitude, polarity, and travel time.

  • Interpret a trace as a one-dimensional time series and a radargram as a two-dimensional section built by lateral stacking of traces.

  • Demonstrate loading SEG-Y, inspecting basic headers, and plotting trace-level and section-level views using ObsPy and matplotlib.

Key Points
  • GPR emits broadband electromagnetic pulses; reflections arise at contrasts in relative permittivity, conductivity, or magnetic permeability.
  • A GPR trace is amplitude versus two-way travel time at a single surface position; a radargram is a collection of traces along a profile.
  • SEG-Y is a common container for GPR; ObsPy can read variable-length traces and expose headers needed for plotting and basic QC.
  • Antenna frequency controls depth–resolution trade-off: lower frequency penetrates deeper with coarser resolution; higher frequency resolves finer targets at shallower depths.

What is geophysics?

In the broadest sense, the science of geophysics is the application of physics to investigations of the Earth, Moon and planets; the subject is thus related to astronomy. To avoid confusion, the use of physics to study the interior of the Earth, from land surface to the inner core, is known as solid earth geophysics. Applied geophysics covers everything from experiments to determine the thickness of the crust (which is important in hydrocarbon exploration), to studies of shallow structures for engineering site investigations, to exploring for groundwater, minerals, and other economic resources, to locating narrow mine shafts or other buried cavities, to mapping archaeological remains or buried pipes and cables; in general, though, the total depth of investigation is less than 100 m.

Ground penetrating radar (GPR) is now a well-accepted geophysical technique. The method uses radio waves to probe “the ground”. In its earliest inception, GPR was primarily applied to natural geologic materials. Now GPR is equally well applied to a host of other media such as wood, concrete, and asphalt. The most common form of GPR measurement deploys a transmitter and a receiver in a fixed geometry, which are moved over the surface to detect reflections from subsurface features. Ground penetrating radar systems are conceptually simple; the objective is to measure field amplitude versus time after excitation. The heart of a GPR system is the timing unit, which controls the generation and detection of signals. Most GPRs operate in the time domain and are normally characterized by their center frequency, which commonly ranges from 50 MHz to 2.5 GHz.

Most GPR systems consist of three components: a console, an antenna, and an encoder. The first two are mandatory; the third is typical for the 99 percent of GPR users who work in the utility locating, civil engineering, concrete scanning, and archaeology industries. The console is the brains of the system: this data logger communicates with both the encoder and the antenna to initiate a signal and record the responses. The antenna is where the GPR signal is produced. The encoder, also referred to as a survey wheel, controls the locational accuracy of the GPR data. As the wheel turns a fraction of a revolution, the console initializes the antenna. Most GPR antennas actually consist of a transmitter and a receiver; once the antenna is initialized by the console, the transmitter produces the signal. The receiver records responses for a defined amount of time, and once that “time window” has reached its maximum, the console stops the receiver from recording any more data.

The GPR antenna produces an electromagnetic pulse at the ground surface. This pulse, which is in the form of a wave, travels into the subsurface and continues until one of two things happens: the signal strength totally decays, or the wave encounters a change in the physical properties of the subsurface material. This change can be a small target such as a buried pipe or a large geological layer such as bedrock. When the wave encounters a change in material properties, some of the wave’s energy reflects back to the ground surface and, if any energy is left, some transmits further into the ground. This produces a one-dimensional view just below the antenna. If many of these one-dimensional views are collected next to each other (i.e. the GPR is pushed along the ground surface in a line), the GPR generates a two-dimensional view of the subsurface. This two-dimensional view is equivalent to looking at the wall of an excavation trench: it is a vertical profile of the subsurface directly beneath the path that the GPR was pushed along.
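
The stacking of one-dimensional views into a two-dimensional section can be sketched with synthetic data (all shapes and values below are illustrative, not from a real survey):

```python
import numpy as np

# Illustrative sketch: each column is one trace recorded at a surface position;
# placing columns side by side builds the 2D section (radargram).
n_samples, n_positions = 100, 20      # assumed time samples and surface positions
section = np.zeros((n_samples, n_positions))

for pos in range(n_positions):
    trace = np.zeros(n_samples)       # one-dimensional view below the antenna
    trace[40] = 1.0                   # a flat reflector arriving at sample 40
    section[:, pos] = trace

# A flat layer shows up as a horizontal event across every trace
print(section.shape)                  # (100, 20)
```

A dipping layer or a compact target would place the spike at a different sample in each column, which is why structure becomes visible only in the assembled section.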

Diagram of a GPR system showing console, antenna, and encoder.
Schematic of a GPR setup

In this episode we will read and visualize a GPR trace from SEG-Y data using Python and the ObsPy library. The aim is rapid inspection for quality control and for building intuition on trace and section appearance prior to any processing.

Example data context

Assume LINE01.sgy is a single profile recorded with an air-coupled or ground-coupled system. The file may contain variable-length traces. The sampling interval, number of samples, and textual or binary headers carry acquisition metadata.

Example 1 — Plot a single trace for a quick look

PYTHON

from obspy.io.segy.segy import _read_segy
import matplotlib.pyplot as plt

# Load SEG-Y using ObsPy (handles variable-length traces)
stream = _read_segy("LINE01.sgy", headonly=False)

# Access trace data
trace0 = stream.traces[0].data
n_traces = len(stream.traces)

# Plot first trace
plt.figure(figsize=(6, 4))
plt.plot(trace0)
plt.title("First Trace from LINE01.sgy")
plt.xlabel("Sample Index")
plt.ylabel("Amplitude")
plt.grid(True)
plt.show()

What this code does, step by step:

Import readers and plotting: brings in ObsPy’s private SEG-Y reader and matplotlib.

Read the SEG-Y stream: _read_segy parses the file into a SEGYFile-like object with a .traces list. headonly=False ensures sample arrays are read, not just headers.

Select the first trace: stream.traces[0].data returns a one-dimensional array of amplitudes for the first position along the line.

Count traces: len(stream.traces) gives the number of positions acquired.

Plot the raw samples: the x-axis is sample index, not time, because a sample interval has not been applied; the y-axis is recorded amplitude in instrument units.

Interpretation note: a clear early-time direct wave or ring-down may dominate the first part of the trace; deeper reflections appear later in the sample index.

Additional checks to perform:

Confirm whether amplitudes are signed and whether clipping is visible.

Convert sample index to time using the trace header sample interval.

Look for saturation or DC drift that might require dewow in further processing.
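
Dewow itself belongs to later processing, but the idea, subtracting a running mean from each trace to suppress DC offset and slow drift, can be sketched as follows (the window length and the synthetic trace are assumptions for illustration):

```python
import numpy as np

def dewow(trace, window=31):
    """Subtract a running mean to suppress DC offset and low-frequency drift.

    Minimal sketch: `window` (in samples) is an assumed tuning parameter,
    typically spanning a few periods of the dominant frequency.
    """
    kernel = np.ones(window) / window
    running_mean = np.convolve(trace, kernel, mode="same")  # same-length output
    return trace - running_mean

# A sinusoid riding on a DC offset: dewow removes the offset, keeps the wiggle
t = np.linspace(0.0, 1.0, 200)
raw = np.sin(2 * np.pi * 25 * t) + 0.5
clean = dewow(raw)
print(abs(float(np.mean(clean[50:150]))))   # close to zero away from the edges
```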

Example 2 — Build a simple radargram image from all traces

PYTHON

import matplotlib.pyplot as plt
from obspy.io.segy.segy import _read_segy
import numpy as np

# Load SEG-Y file
segy_file = "LINE01.sgy"
stream = _read_segy(segy_file, headonly=False)

# Number of traces and max trace length
n_traces = len(stream.traces)
max_len = max(len(tr.data) for tr in stream.traces)

# Fill 2D array with seismic data
data = np.zeros((max_len, n_traces))
for i, tr in enumerate(stream.traces):
    data[:len(tr.data), i] = tr.data

# Plot
plt.figure(figsize=(12, 6))
plt.imshow(data, cmap="gray", aspect="auto", origin="upper", interpolation="none")
plt.title("Seismic Section (LINE01.sgy)")
plt.xlabel("Trace Number")
plt.ylabel("Time Sample")
plt.colorbar(label="Amplitude")
plt.show()

What this code does, step by step:

Read the file and collect traces: same reader as Example 1.

Determine array geometry: n_traces is the lateral dimension; max_len is the longest trace length.

Allocate a section matrix: data is a two-dimensional array with shape (time samples, trace number).

Populate the matrix: for each trace, copy its samples into the corresponding column; shorter traces remain zero-padded at late times.

Display the radargram: imshow renders amplitude as image intensity.

origin="upper" places early time at the top of the image.

aspect="auto" allows traces to fill the figure width.

cmap="gray" uses a neutral colour map suited to signed amplitudes.

interpolation="none" avoids smoothing that could hide thin reflectors.

Interpretation note: continuous subhorizontal events suggest layering; symmetric hyperbolas indicate compact objects. Polarity is arbitrary at this stage and depends on plotting conventions.

Additional checks to perform:

Verify whether zero padding at late times creates a visible edge; in later processing you may want to trim to a common time window.

If position ticks are uneven, later steps should remap traces to true distance before velocity analysis.

To relate time samples to actual time, multiply sample index by the per-trace sample interval from headers.

Example 3 — Inspect acquisition metadata

SEG-Y files store much more than amplitudes. Each trace contains a header with information such as the number of samples, sample interval, and trace sequence number. Reading these values is a first quality-control step.

PYTHON

from obspy.io.segy.segy import _read_segy

stream = _read_segy("LINE01.sgy", headonly=True)

# Access the first trace header
tr0 = stream.traces[0]

print("Number of samples:", tr0.header.number_of_samples_in_this_trace)
print("Sample interval (microseconds):", tr0.header.sample_interval_in_ms_for_this_trace)
print("Trace sequence number:", tr0.header.trace_sequence_number_within_line)

What this code does, step by step:

headonly=True avoids loading amplitudes and reads only headers.

number_of_samples × sample_interval gives the total time window.

Trace sequence numbers are useful for checking completeness of the dataset.

Inspecting headers ensures that later visualizations are correctly scaled.
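
The time-window arithmetic can be checked with plain numbers (the values below are illustrative, not read from LINE01.sgy):

```python
# Total time window = number of samples x sample interval.
# Illustrative values; the SEG-Y interval field is nominally in microseconds,
# so the result here is a time window in microseconds.
n_samples = 512          # assumed samples per trace
dt_us = 0.8              # assumed sample interval in microseconds

time_window_us = n_samples * dt_us
print(time_window_us)    # 409.6
```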

Example 4 — Adding a time axis to a trace

Plots by sample index are useful, but a time axis makes the physics clearer. We can build a time vector from the sample interval stored in the header.

PYTHON

import matplotlib.pyplot as plt
from obspy.io.segy.segy import _read_segy
import numpy as np

stream = _read_segy("LINE01.sgy", headonly=False)
tr0 = stream.traces[0]

# Extract sample interval (microseconds) and build time axis
dt = tr0.header.sample_interval_in_ms_for_this_trace  # in microseconds
n_samples = len(tr0.data)
time = np.arange(n_samples) * dt * 1e-6  # convert to seconds

# Plot trace with time axis
plt.figure(figsize=(6, 4))
plt.plot(time, tr0.data)
plt.title("First Trace with Time Axis")
plt.xlabel("Two-way travel time [s]")
plt.ylabel("Amplitude")
plt.grid(True)
plt.show()

What this code does, step by step:

The x-axis now shows two-way travel time in seconds.

This connects the data to subsurface depth, once a velocity model is assumed.

Early samples correspond to shallow reflections; later samples correspond to deeper features.
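
Once a velocity is assumed, two-way travel time converts to depth as depth = v × t / 2, dividing by two because the wave travels down and back (the velocity below is an illustrative value, not derived from the data):

```python
import numpy as np

v = 0.1e9                              # assumed velocity in m/s (0.1 m/ns, a common soil value)
twt = np.array([0.0, 10e-9, 20e-9])    # two-way travel times in seconds

depth = v * twt / 2.0                  # halve: down-and-back path
print(depth[1], depth[2])              # 0.5 1.0
```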

Example 5 — Applying a simple gain

Near-surface reflections are often strong, while deeper reflections appear weak. A gain function can emphasize later arrivals. This example applies a simple exponential gain to one trace.

PYTHON

import numpy as np
import matplotlib.pyplot as plt
from obspy.io.segy.segy import _read_segy

stream = _read_segy("LINE01.sgy", headonly=False)
tr0 = stream.traces[0]

# Build sample index array
n_samples = len(tr0.data)
time = np.arange(n_samples)

# Apply exponential gain
gain = np.exp(time / (0.2 * n_samples))  # adjust denominator to control strength
trace_gain = tr0.data * gain

# Compare original and gained traces
plt.figure(figsize=(6, 4))
plt.plot(time, tr0.data, label="Original")
plt.plot(time, trace_gain, label="With exponential gain", alpha=0.7)
plt.title("Trace with and without Gain")
plt.xlabel("Sample Index")
plt.ylabel("Amplitude")
plt.legend()
plt.grid(True)
plt.show()

What this code does, step by step:

The exponential gain increases amplitudes at later times.

This helps reveal weak deeper events that might otherwise be invisible.

The trade-off is that noise at late times is also amplified.

Students learn the importance of balancing clarity and noise.

Content from AGC


Last updated on 2025-09-07

Overview

Questions

  • Why do GPR signals decrease in amplitude with increasing two-way travel time?

  • What is the principle of Automatic Gain Control (AGC) in signal processing?

  • How does the choice of window length influence the effect of AGC on a radargram?

  • How can AGC be implemented in Python to enhance deeper reflections in SEG-Y data?

  • What are the potential benefits and drawbacks of applying AGC to GPR data?

Objectives

  • Explain the physical reason for signal decay in GPR traces and why amplitude balancing is needed.

  • Describe the concept of AGC as a sliding-window normalization method.

  • Demonstrate the step-by-step implementation of AGC on a 2D data section using Python.

  • Evaluate how window length selection affects signal visibility and noise amplification.

  • Compare radargrams before and after AGC to interpret the improvement in deeper reflections.

Key Points
  • GPR amplitudes decay with travel time due to geometric spreading, absorption, and scattering.

  • Automatic Gain Control (AGC) rescales amplitudes within a moving window to balance early and late signals.

  • Window length controls the behavior: short windows emphasize local contrasts but may boost noise, long windows smooth variations but may underrepresent weak reflections.

  • AGC does not recover true amplitudes; it enhances relative visibility for interpretation.

  • In Python, AGC can be implemented by normalizing each trace sample by the RMS of its surrounding window.

Automatic Gain Control (AGC) in GPR

One of the main challenges in Ground Penetrating Radar (GPR) interpretation is that recorded amplitudes decrease with increasing two-way travel time. This decay occurs because of several factors: geometric spreading of the wavefront, intrinsic absorption by the medium, and scattering losses. As a result, reflections from deeper structures often appear weak compared to strong early arrivals near the surface.

Automatic Gain Control (AGC) is a signal-processing technique designed to correct this imbalance. Instead of applying a single global gain function, AGC adjusts amplitudes locally using a sliding window. Within each window, amplitudes are normalized relative to their average energy (root mean square, RMS). This ensures that both shallow and deep events are displayed with more balanced strength.

The effect of AGC depends strongly on the window length:

Short windows enhance small-scale features but may also amplify random noise.

Long windows provide smoother balancing but may underemphasize weak events.

It is important to remember that AGC modifies amplitudes for visualization and interpretation rather than preserving true reflection strength. In this lesson, we will implement AGC in Python, apply it to a SEG-Y radargram, and compare sections before and after processing to see how deeper reflections become clearer.

Example 1 — Applying Automatic Gain Control (AGC) to a radargram

apply_agc normalizes each sample by the RMS amplitude of its local window. The window length determines how local the gain is. After applying AGC, deeper reflections become more visible, although noise may also be amplified. The plotted radargram now shows both shallow and deep events with balanced amplitudes for interpretation.

PYTHON

import numpy as np
import matplotlib.pyplot as plt
from obspy.io.segy.segy import _read_segy

def apply_agc(data, window_len):
    """
    Apply automatic gain control (AGC) on a 2D seismic array.

    Parameters:
        data (ndarray): [samples x traces]
        window_len (int): Window length in samples

    Returns:
        ndarray: AGC-applied seismic section
    """
    agc_data = np.zeros_like(data)
    half_window = window_len // 2

    for i in range(data.shape[1]):  # loop over traces
        trace = data[:, i]
        agc_trace = np.zeros_like(trace)

        for j in range(len(trace)):
            start = max(j - half_window, 0)
            end = min(j + half_window, len(trace))
            window = trace[start:end]
            rms = np.sqrt(np.mean(window ** 2)) + 1e-12  # avoid divide by zero
            agc_trace[j] = trace[j] / rms

        agc_data[:, i] = agc_trace

    return agc_data

# --- Load SEG-Y ---
segy_file = "LINE01.sgy"
stream = _read_segy(segy_file, headonly=False)

# --- Build data array ---
n_traces = len(stream.traces)
max_len = max(len(tr.data) for tr in stream.traces)
data = np.zeros((max_len, n_traces))
for i, tr in enumerate(stream.traces):
    data[:len(tr.data), i] = tr.data

# --- Apply AGC ---
window_length = 50  # samples (adjust based on data)
agc_result = apply_agc(data, window_length)

# --- Plot ---
plt.figure(figsize=(12, 6))
plt.imshow(agc_result, cmap="gray", aspect="auto", origin="upper")
plt.title("Seismic Section with AGC (window = {} samples)".format(window_length))
plt.xlabel("Trace Number")
plt.ylabel("Time Sample")
plt.colorbar(label="Amplitude (AGC-normalized)")
plt.show()

Example 2 — Compare radargram before and after AGC

This example loads the SEG-Y profile, builds the section matrix, applies AGC with a chosen window, and plots the original and the AGC result as separate figures for direct visual comparison.

PYTHON

import numpy as np
import matplotlib.pyplot as plt
from obspy.io.segy.segy import _read_segy

def apply_agc(data, window_len):
    n_samples, n_traces = data.shape
    agc_data = np.zeros_like(data)
    half_window = window_len // 2
    for i in range(n_traces):
        for j in range(n_samples):
            start = max(0, j - half_window)
            end = min(n_samples, j + half_window)
            rms = np.sqrt(np.mean(data[start:end, i]**2)) + 1e-12
            agc_data[j, i] = data[j, i] / rms
    return agc_data

# Load and assemble section
stream = _read_segy("LINE01.sgy", headonly=False)
n_traces = len(stream.traces)
max_len = max(len(tr.data) for tr in stream.traces)

data = np.zeros((max_len, n_traces))
for i, tr in enumerate(stream.traces):
    data[:len(tr.data), i] = tr.data

# Apply AGC
window_length = 50  # samples
agc_data = apply_agc(data, window_length)

# Plot original section
plt.figure(figsize=(12, 6))
plt.imshow(data, cmap="gray", aspect="auto", origin="upper", interpolation="none")
plt.title("Original Section (no gain)")
plt.xlabel("Trace Number")
plt.ylabel("Time Sample")
plt.colorbar(label="Amplitude")
plt.show()

# Plot AGC section
plt.figure(figsize=(12, 6))
plt.imshow(agc_data, cmap="gray", aspect="auto", origin="upper", interpolation="none")
plt.title(f"AGC Applied Section (Window = {window_length} samples)")
plt.xlabel("Trace Number")
plt.ylabel("Time Sample")
plt.colorbar(label="Normalized Amplitude")
plt.show()

Explanation:

The first figure shows the unscaled radargram where early strong arrivals can mask deeper reflections.

The second figure shows the AGC result where deeper events are more balanced against near-surface energy.

Use the same color map and plotting parameters so differences are due to AGC only.

Example 3 — Window length sensitivity study

AGC behavior depends on the window length. This example applies AGC with several window lengths and shows separate figures so students can judge the trade-offs.

PYTHON

import numpy as np
import matplotlib.pyplot as plt
from obspy.io.segy.segy import _read_segy

def apply_agc(data, window_len):
    n_samples, n_traces = data.shape
    agc_data = np.zeros_like(data)
    half_window = window_len // 2
    for i in range(n_traces):
        for j in range(n_samples):
            start = max(0, j - half_window)
            end = min(n_samples, j + half_window)
            rms = np.sqrt(np.mean(data[start:end, i]**2)) + 1e-12
            agc_data[j, i] = data[j, i] / rms
    return agc_data

# Load and assemble section (once)
stream = _read_segy("LINE01.sgy", headonly=False)
n_traces = len(stream.traces)
max_len = max(len(tr.data) for tr in stream.traces)

data = np.zeros((max_len, n_traces))
for i, tr in enumerate(stream.traces):
    data[:len(tr.data), i] = tr.data

# Try several window lengths (in samples)
window_list = [20, 50, 100, 200]

for w in window_list:
    agc_w = apply_agc(data, w)

    plt.figure(figsize=(12, 6))
    plt.imshow(agc_w, cmap="gray", aspect="auto", origin="upper", interpolation="none")
    plt.title(f"AGC Section — Window = {w} samples")
    plt.xlabel("Trace Number")
    plt.ylabel("Time Sample")
    plt.colorbar(label="Normalized Amplitude")
    plt.show()

Explanation and guidance:

Short windows, for example 20 samples, strongly equalize local amplitudes. Small features pop out, but late-time noise can be amplified.

Intermediate windows around 50 to 100 samples often give a good balance for many GPR datasets.

Long windows such as 200 samples produce smoother results that preserve broader trends but may underemphasize weak reflectors.

A practical rule of thumb is to select a window corresponding to a few to several periods of the dominant frequency:

\[ \text{window\_len} \approx k \times \frac{1}{f_c\,\Delta t} \]

where \(f_c\) is the antenna center frequency and \(\Delta t\) is the sample interval; choose \(k\) between roughly three and ten.
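
As a sketch, this rule of thumb translates into a small helper function (the antenna frequency and sampling values below are assumed examples):

```python
def agc_window_samples(f_c_hz, dt_s, k=5):
    """Window length in samples spanning k periods of the dominant frequency.

    f_c_hz: antenna center frequency in Hz; dt_s: sample interval in seconds;
    k: number of dominant periods the window should cover (assumed default).
    """
    samples_per_period = 1.0 / (f_c_hz * dt_s)
    return int(round(k * samples_per_period))

# Example: a 500 MHz antenna sampled at 0.2 ns per sample
print(agc_window_samples(500e6, 0.2e-9, k=5))   # 50
```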

Content from Background Removal


Last updated on 2025-09-07

Overview

Questions

  • What is background removal in GPR data processing?

  • Why is background removal needed for clearer interpretation?

  • How can background removal be implemented programmatically?

Objectives

  • Define the concept of background in GPR sections and explain its origin.

  • Demonstrate how to remove background by subtracting the average trace.

  • Evaluate the effect of background removal on data clarity and reflector visibility.

Key Points
  • Background in GPR sections often comes from consistent system responses, coupling effects, or horizontal banding.

  • Removing the background enhances true reflections and reduces horizontal noise.

  • A simple method is subtracting the mean trace across all traces.

Ground Penetrating Radar (GPR) data often contain background noise that is unrelated to true subsurface reflections.
This background usually appears as horizontal banding or low-frequency components repeated across all traces.
It originates from several sources:
- the direct wave and antenna ringing,
- system electronics,
- consistent ground coupling responses.

If left untreated, this background can mask weak reflections, especially deeper ones.

Background removal is a common preprocessing step that improves the visibility of hyperbolas and stratigraphic features.
The simplest approach is to compute the average trace across all traces in a section and subtract it from each individual trace.
This removes energy that is consistent everywhere (the “background”) while preserving local variations that indicate subsurface features.

Example 1 — Removing background by subtracting the mean trace

PYTHON

# Background removal by subtracting mean trace
import numpy as np
import matplotlib.pyplot as plt

def remove_background(data):
    background = np.mean(data, axis=1, keepdims=True)
    return data - background

# Apply it on the AGC-processed section from the previous episode
data_bg_removed = remove_background(agc_result)

# Plot
plt.figure(figsize=(12, 6))
plt.imshow(data_bg_removed, cmap="gray", aspect="auto", origin="upper")
plt.title("GPR Section with Background Removed")
plt.xlabel("Trace Number")
plt.ylabel("Time Sample")
plt.colorbar(label="Amplitude")
plt.show()

Explanation:

Define the function

np.mean(data, axis=1, keepdims=True) computes the average amplitude across all traces, at each time sample.

The result is a “mean trace” representing the background that is common to all traces.

keepdims=True preserves the two-dimensional shape so subtraction works cleanly.

Subtract the background

data - background removes this mean trace from each column (trace) in the data section.

What remains are relative deviations, i.e. actual subsurface reflections.

Apply to processed section

In this case, agc_result is the section after Automatic Gain Control.

Applying background removal after AGC makes weaker reflectors clearer.

Plot

imshow shows the new radargram with background removed.

Horizontal banding should be reduced, and hyperbolic events should stand out.
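
The broadcasting behind this subtraction can be seen on a tiny synthetic section (the numbers are illustrative):

```python
import numpy as np

# Tiny section: 3 time samples x 4 traces; the last trace carries a local anomaly
data = np.array([[1.0, 1.0, 1.0, 5.0],
                 [2.0, 2.0, 2.0, 2.0],
                 [0.0, 0.0, 0.0, 0.0]])

background = np.mean(data, axis=1, keepdims=True)   # shape (3, 1): one mean per time sample
residual = data - background                        # the (3, 1) column broadcasts across all 4 traces

print(background.ravel())   # [2. 2. 0.]
print(residual[0])          # [-1. -1. -1.  3.]
```

The constant rows vanish entirely, while the anomaly in the first time sample survives as a clear deviation.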

Example 2 — Visual comparison: before vs after background removal

This example shows the original section and the background-removed section, so learners can judge the effect directly.

PYTHON

import numpy as np
import matplotlib.pyplot as plt

def remove_background(data):
    # Estimate background as the mean across traces at each time sample
    background = np.mean(data, axis=1, keepdims=True)
    # Subtract the background from every trace
    return data - background, background

# Assume `agc_result` is your AGC-processed section with shape [samples, traces]
data_bg_removed, estimated_background = remove_background(agc_result)

# Plot original section
plt.figure(figsize=(12, 5))
plt.imshow(agc_result, cmap="gray", aspect="auto", origin="upper", interpolation="none")
plt.title("Original Section (after AGC, before background removal)")
plt.xlabel("Trace Number")
plt.ylabel("Time Sample")
plt.colorbar(label="Amplitude")
plt.show()

# Plot background-removed section
plt.figure(figsize=(12, 5))
plt.imshow(data_bg_removed, cmap="gray", aspect="auto", origin="upper", interpolation="none")
plt.title("Section after Background Removal (mean trace subtraction)")
plt.xlabel("Trace Number")
plt.ylabel("Time Sample")
plt.colorbar(label="Amplitude")
plt.show()

# Optional: visualize the estimated background (as an image)
plt.figure(figsize=(12, 5))
plt.imshow(np.repeat(estimated_background, agc_result.shape[1], axis=1),
           cmap="gray", aspect="auto", origin="upper", interpolation="none")
plt.title("Estimated Background (mean across traces, repeated laterally)")
plt.xlabel("Trace Number")
plt.ylabel("Time Sample")
plt.colorbar(label="Amplitude")
plt.show()

Explanation:

The estimated background is the mean across traces at each time sample; repeating it laterally shows the component being removed.

Comparing the two sections highlights how horizontal banding is reduced and diffractive features become clearer.

Example 3 — Robust background removal using median trace

Mean subtraction can be sensitive to a few large amplitudes. A median is more robust when strong reflectors or outliers are present. This example subtracts the per-sample median across traces.

PYTHON

import numpy as np
import matplotlib.pyplot as plt

def remove_background_median(data):
    # Estimate background as the median across traces at each time sample (robust to outliers)
    background_med = np.median(data, axis=1, keepdims=True)
    # Subtract the median background from every trace
    return data - background_med, background_med

# Apply robust background removal
data_bg_removed_med, estimated_background_med = remove_background_median(agc_result)

# Plot result
plt.figure(figsize=(12, 5))
plt.imshow(data_bg_removed_med, cmap="gray", aspect="auto", origin="upper", interpolation="none")
plt.title("Section after Background Removal (median trace subtraction)")
plt.xlabel("Trace Number")
plt.ylabel("Time Sample")
plt.colorbar(label="Amplitude")
plt.show()

Explanation:

Median across traces at each time sample suppresses persistent horizontal components while resisting the influence of a few large events.

Resulting sections often show cleaner backgrounds when strong isolated reflectors are present.

Content from Time-Zero Correction


Last updated on 2025-09-07

Overview

Questions

  • What does time-zero represent in a GPR trace?
  • Why do system delays and antenna geometry cause time-zero to be shifted?
  • How can first breaks be detected automatically in traces?
  • What happens to interpretation if time-zero is not corrected?
  • How can we compare sections before and after time-zero correction?

Objectives

  • Define time-zero and explain its importance for accurate depth conversion.
  • Identify common causes of time-zero shifts in GPR data.
  • Demonstrate an automatic time-zero correction workflow using first-break detection.
  • Visualize the distribution of first-break picks and assess their consistency.
  • Compare radargrams before and after correction to evaluate improvements.
Key Points
  • Time-zero is the instant when the transmitted pulse enters the ground.
  • System delays and geometry shift the apparent time-zero away from zero samples.
  • Automatic methods detect first breaks using the signal envelope and amplitude thresholds.
  • Aligning traces to a common first break ensures consistent interpretation.
  • Time-zero correction is essential for reliable time-to-depth conversion and structural imaging.

Time-Zero in GPR

In Ground Penetrating Radar (GPR), time-zero is the reference point corresponding to the instant when the transmitted electromagnetic pulse leaves the antenna and enters the medium. Ideally, this should appear at the start of every recorded trace. In practice, traces often show a shift in time-zero due to:

  • electronic delays in the system trigger and recording,
  • physical separation between transmitting and receiving antennas,
  • coupling effects between antennas and the ground surface.

These shifts cause reflections to appear later than their true arrival time. If left uncorrected, such misalignment leads to errors in estimating depths and comparing events across traces.

Time-zero correction realigns all traces so that their effective start time corresponds to the true wave entry. This process improves consistency and ensures that travel times can be correctly converted to depth using velocity models.

In this lesson we use an automatic first-break detection method. The Hilbert transform is applied to compute the signal envelope, and a threshold relative to the maximum amplitude is used to pick the first significant arrival in each trace. The median of these first breaks defines the common alignment point. Traces are then shifted to this reference, and the section is trimmed so the new time axis begins at zero.

This correction is essential before velocity analysis, migration, or quantitative interpretation of GPR data. It is a standard preprocessing step in both engineering and archaeological applications.
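Before running the full workflow, the envelope-plus-threshold idea can be tried on a synthetic pulse; the carrier frequency, envelope width, and pulse centre below are made up for illustration and are not lesson data:

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic Gabor pulse: a 0.5 rad/sample carrier under a Gaussian envelope
# centred at sample 200 (all numbers here are illustrative)
n, center, sigma = 400, 200, 15.0
t = np.arange(n) - center
trace = np.cos(0.5 * t) * np.exp(-0.5 * (t / sigma) ** 2)

# Envelope from the analytic signal, then a 5% amplitude threshold,
# mirroring the detection rule used in this lesson
envelope = np.abs(hilbert(trace))
threshold = 0.05 * envelope.max()
first_break = int(np.argmax(envelope > threshold))

print(first_break)  # the pick lands on the rising flank of the envelope
```

Because the pulse is smooth there is no sharp onset; the pick simply marks where the envelope first exceeds the threshold, a few tens of samples before the envelope peak.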

Example 1 — Automatic Time-Zero Correction using First-Break Detection


The following code implements an automatic method to detect first breaks in GPR traces and realign all traces so that their time-zero is consistent. This approach uses the Hilbert transform to compute the signal envelope and an amplitude threshold to identify the first arrival.

PYTHON

import numpy as np
import matplotlib.pyplot as plt
from obspy.io.segy.segy import _read_segy
from scipy.signal import hilbert
import seaborn as sns

def detect_first_break(trace, threshold_ratio=0.05, min_sample=5):
    """
    Detect the first break index using an amplitude threshold on the envelope,
    starting the search from a specified minimum sample.
    """
    envelope = np.abs(hilbert(trace))
    threshold = threshold_ratio * np.max(envelope)
    for i in range(min_sample, len(envelope)):
        if envelope[i] > threshold:
            return i
    return min_sample  # fallback

def time_zero_correction_auto(data, threshold_ratio=0.05, min_sample=5):
    """
    Automatically estimate the target sample (median of all first breaks),
    align all traces to it, and trim the section to re-zero time.
    """
    n_samples, n_traces = data.shape
    shifts = [detect_first_break(data[:, i], threshold_ratio, min_sample) for i in range(n_traces)]

    # Estimate target sample using median of detected first breaks
    target_sample = int(np.median(shifts))
    print(f"Estimated target sample for alignment: {target_sample}")

    corrected = np.zeros_like(data)
    for i in range(n_traces):
        shift = shifts[i] - target_sample
        trace = data[:, i]
        shifted_trace = np.zeros_like(trace)
        if shift >= 0:
            shifted_trace[:len(trace)-shift] = trace[shift:]
        else:
            shifted_trace[-shift:] = trace[:len(trace)+shift]
        corrected[:, i] = shifted_trace

    # Trim the top 'target_sample' rows so time-zero starts from there
    corrected_trimmed = corrected[target_sample:, :]
    return corrected_trimmed, shifts, target_sample

# --- Load GPR (SEGY) data ---
segy_file = "LINE01.sgy"
stream = _read_segy(segy_file, headonly=False)

# Build 2D data array [samples x traces]
n_traces = len(stream.traces)
max_len = max(len(tr.data) for tr in stream.traces)
data = np.zeros((max_len, n_traces))
for i, tr in enumerate(stream.traces):
    data[:len(tr.data), i] = tr.data

# --- Auto Time-Zero Correction ---
threshold_ratio = 0.05
min_sample = 5
corrected_data, shifts, target_sample = time_zero_correction_auto(data, threshold_ratio, min_sample)

# --- First-Break Shift Histogram ---
plt.figure(figsize=(10, 3))
sns.histplot(shifts, bins=20, kde=True)
plt.title("Distribution of First-Break Shifts (Samples)")
plt.xlabel("Sample")
plt.ylabel("Frequency")
plt.tight_layout()
plt.show()

# --- Overlay Original vs Corrected for One Trace ---
plt.figure(figsize=(10, 4))
idx = 1000  # example trace index; must be smaller than n_traces
plt.plot(data[:, idx], label="Original", alpha=0.6)
aligned = np.zeros_like(data[:, idx])
shift = shifts[idx] - target_sample
if shift >= 0:
    aligned[:len(aligned) - shift] = data[:, idx][shift:]
else:
    aligned[-shift:] = data[:, idx][:len(aligned) + shift]
plt.plot(aligned[target_sample:], label="Corrected", linestyle="--")
plt.title(f"Trace {idx} - Original vs Auto-Aligned")
plt.xlabel("Sample")
plt.ylabel("Amplitude")
plt.legend()
plt.grid(True)
plt.tight_layout()
plt.show()

# --- Plot First-Break Picks on Sample Traces ---
plt.figure(figsize=(10, 6))
for i in range(5):
    idx = i * (n_traces // 5)
    plt.plot(data[:, idx], label=f"Trace {idx}")
    plt.axvline(x=shifts[idx], color='r', linestyle='--')
plt.axvline(target_sample, color='k', linestyle=':', label='Auto Target Sample')
plt.title("First-Break Picks on Sample Traces")
plt.xlabel("Sample")
plt.ylabel("Amplitude")
plt.legend()
plt.grid(True)
plt.tight_layout()
plt.show()

# --- Full Section: Before vs After Correction ---
fig, axs = plt.subplots(1, 2, figsize=(14, 6), sharey=False)

axs[0].imshow(data, cmap="gray", aspect="auto", origin="upper")
axs[0].set_title("Before Time-Zero Correction")
axs[0].set_xlabel("Trace")
axs[0].set_ylabel("Sample")

axs[1].imshow(corrected_data, cmap="gray", aspect="auto", origin="upper")
axs[1].set_title(f"After Auto Time-Zero Correction (Aligned to Sample {target_sample})")
axs[1].set_xlabel("Trace")

plt.tight_layout()
plt.show()

Step-by-step explanation

First-break detection

The Hilbert transform computes the analytic signal; its absolute value gives the envelope.

The code sets a threshold at 5% (threshold_ratio=0.05) of the maximum envelope.

The first sample exceeding this threshold (after min_sample) is picked as the first break.

Automatic alignment

All traces are analyzed, producing a list of first-break indices (shifts).

The median of these indices defines the target sample for alignment.

Each trace is shifted so that its first break matches this target.

Section trimming

Samples above the target are removed, so the new section begins at a common time-zero.
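The shift-and-trim logic can be checked on a toy two-trace section; the traces and their first breaks (samples 2 and 4) below are invented for illustration:

```python
import numpy as np

# Toy section: 6 samples x 2 traces, "first breaks" at samples 2 and 4
data = np.array([[0, 0],
                 [0, 0],
                 [5, 0],
                 [1, 0],
                 [0, 5],
                 [0, 1]], dtype=float)
shifts = [2, 4]
target = int(np.median(shifts))  # common alignment sample

# Shift each trace so its first break lands on the target sample
corrected = np.zeros_like(data)
for i, s in enumerate(shifts):
    shift = s - target
    if shift >= 0:
        corrected[:data.shape[0] - shift, i] = data[shift:, i]
    else:
        corrected[-shift:, i] = data[:data.shape[0] + shift, i]

# Trim the rows above the target so time-zero starts at the first break
trimmed = corrected[target:, :]
print(trimmed)
```

Both first breaks end up on the median sample, and after trimming they sit in row 0 of the new section.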

Quality control plots

A histogram of first breaks shows their distribution and helps check consistency.

A trace overlay (original vs corrected) illustrates the alignment.

Vertical lines on sample traces visualize first-break picks.

Side-by-side radargrams show the effect of correction at the section scale.

Interpretation

After correction, all traces share the same starting point, enabling accurate depth conversion.

Misalignments due to system delays are removed, improving reflector continuity.

The method is automatic but can be adjusted (e.g., threshold ratio, minimum sample) to fit data characteristics.

Example 2 — Manual Time-Zero Correction using a Reference Trace


In some cases, automatic picking may not be reliable (e.g. noisy data, strong early clutter).
A simpler approach is to pick a reference trace manually and align all other traces to its first break.

PYTHON

# --- Pick a reference trace manually ---
ref_idx = 500  # example: middle trace as reference
ref_trace = data[:, ref_idx]

# Detect first break on reference trace
ref_fb = detect_first_break(ref_trace, threshold_ratio=0.05, min_sample=5)
print(f"Reference trace index: {ref_idx}, First break sample: {ref_fb}")

# Detect first breaks on all traces
all_fb = [detect_first_break(data[:, i], threshold_ratio=0.05, min_sample=5) for i in range(n_traces)]

# Align traces to the reference first break
corrected_manual = np.zeros_like(data)
for i in range(n_traces):
    shift = all_fb[i] - ref_fb
    trace = data[:, i]
    shifted_trace = np.zeros_like(trace)
    if shift >= 0:
        shifted_trace[:len(trace)-shift] = trace[shift:]
    else:
        shifted_trace[-shift:] = trace[:len(trace)+shift]
    corrected_manual[:, i] = shifted_trace

# Trim so time-zero begins at the reference first break
corrected_manual = corrected_manual[ref_fb:, :]

# Plot before and after
fig, axs = plt.subplots(1, 2, figsize=(14, 6), sharey=False)
axs[0].imshow(data, cmap="gray", aspect="auto", origin="upper")
axs[0].set_title("Before Manual Time-Zero Correction")
axs[0].set_xlabel("Trace")
axs[0].set_ylabel("Sample")

axs[1].imshow(corrected_manual, cmap="gray", aspect="auto", origin="upper")
axs[1].set_title(f"After Manual Correction (Aligned to Trace {ref_idx})")
axs[1].set_xlabel("Trace")

plt.tight_layout()
plt.show()

Explanation

A reference trace is chosen manually (here: index 500).

Its first break defines the target time-zero.

All other traces are shifted relative to this reference.

Useful when automatic methods pick inconsistent breaks due to noise or strong clutter.

Less robust if the reference trace itself is distorted, so careful selection is required.

Example 3 — Visualizing First-Break Picks Across the Section


To check the reliability of time-zero correction, it is useful to overlay first-break picks on top of the raw section.

PYTHON

# Detect first breaks on all traces (reuse detect_first_break function)
first_breaks = [detect_first_break(data[:, i], threshold_ratio=0.05, min_sample=5) for i in range(n_traces)]

# Plot section with first-break picks overlaid
plt.figure(figsize=(12, 6))
plt.imshow(data, cmap="gray", aspect="auto", origin="upper")
plt.scatter(np.arange(n_traces), first_breaks, color='red', s=10, label='First breaks')
plt.title("Raw Section with First-Break Picks")
plt.xlabel("Trace Number")
plt.ylabel("Sample Index")
plt.legend()
plt.show()

Explanation

Each red point marks the picked first break for one trace.

A consistent horizontal band of picks indicates reliable automatic detection.

Large scatter suggests noise or variable coupling; parameters (threshold, minimum sample) may need adjusting.

This visualization helps students evaluate whether the correction will be successful before applying shifts.

Content from Bandpass Filtering


Last updated on 2025-09-07

Overview

Questions

  • What is a bandpass filter in the context of GPR signal processing?
  • Why apply a bandpass filter to GPR traces before interpretation?
  • How do sampling frequency and Nyquist frequency constrain the choice of cutoffs?
  • How does zero-phase filtering affect waveform shape compared with causal filtering?
  • How can frequency spectra guide the selection of passband limits?

Objectives

  • Define bandpass filtering and relate it to antenna bandwidth and recorded noise.
  • Compute sampling and Nyquist frequencies from SEG-Y headers and validate cutoffs.
  • Apply a zero-phase Butterworth bandpass to a 2D GPR section trace by trace.
  • Compare radargrams and trace spectra before and after filtering to assess effectiveness.
  • Describe trade-offs when tightening or relaxing the passband.
Key Points
  • GPR data often contain low-frequency drift and high-frequency noise; a bandpass suppresses both.
  • Cutoffs must lie within (0, Nyquist); selecting them near the antenna’s effective bandwidth improves SNR.
  • Zero-phase forward-backward filtering (filtfilt) preserves arrival timing and polarity.
  • Overly narrow passbands may distort wavelets or remove useful signal.
  • Frequency spectra of representative traces help justify passband choices quantitatively.

GPR antennas radiate broadband pulses with finite bandwidth. Recorded sections typically include low-frequency components from coupling or instrumentation and high-frequency noise from electronics or environment. A bandpass filter retains a chosen frequency band while attenuating energy below and above it. In practice, cutoffs should respect the Nyquist frequency and bracket the dominant signal band suggested by the antenna type and the observed spectra.

This lesson applies a zero-phase Butterworth bandpass to a 2D section. Zero-phase filtering via forward-backward application avoids phase distortion, so reflector timing is preserved. We compute sampling and Nyquist frequencies from SEG-Y headers, apply the filter trace-by-trace, and evaluate the result using both images and amplitude spectra.
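A small synthetic check illustrates why filtfilt is used instead of a single forward pass; the 100 Hz pulse and 50-200 Hz band here are arbitrary choices, far below real GPR frequencies, chosen only to keep the numbers readable:

```python
import numpy as np
from scipy.signal import butter, filtfilt, lfilter, gausspulse

# Gaussian-modulated 100 Hz pulse peaking at sample 200
fs = 1000.0
t = (np.arange(1000) - 200) / fs
pulse = gausspulse(t, fc=100)

# Fourth-order Butterworth bandpass, 50-200 Hz
b, a = butter(4, [50 / (fs / 2), 200 / (fs / 2)], btype="band")

causal = lfilter(b, a, pulse)       # one forward pass: output is delayed
zero_phase = filtfilt(b, a, pulse)  # forward-backward: timing preserved

print(np.argmax(np.abs(pulse)),
      np.argmax(np.abs(zero_phase)),
      np.argmax(np.abs(causal)))
```

The zero-phase output peaks at the same sample as the input, while the causal output peaks later by roughly the filter's group delay; for picking arrival times that shift matters.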

Example 1 — Zero-phase Butterworth bandpass on a 2D GPR section


The code below reads a SEG-Y profile, derives the sampling frequency, applies a fourth-order Butterworth bandpass with cutoffs as percentages of Nyquist, and compares sections and spectra before and after filtering.

PYTHON

import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import butter, filtfilt
from obspy.io.segy.segy import _read_segy

def bandpass_filter(data, lowcut, highcut, fs, order=4):
    nyquist = 0.5 * fs
    low = lowcut / nyquist
    high = highcut / nyquist
    if not (0 < low < high < 1):
        raise ValueError(f"Cutoff frequencies must satisfy 0 < low < high < {nyquist:.2f} Hz. Got low={lowcut}, high={highcut}")
    b, a = butter(order, [low, high], btype='band')
    
    filtered = np.zeros_like(data)
    for i in range(data.shape[1]):
        filtered[:, i] = filtfilt(b, a, data[:, i])
    return filtered

def plot_frequency_spectrum(trace, fs, title):
    """
    Plot frequency amplitude spectrum of a single trace.
    """
    n = len(trace)
    freqs = np.fft.rfftfreq(n, d=1/fs)
    spectrum = np.abs(np.fft.rfft(trace))
    
    plt.plot(freqs / 1e6, spectrum, label=title)  # MHz
    plt.xlabel("Frequency (MHz)")
    plt.ylabel("Amplitude")
    plt.grid(True)

# --- Load SEG-Y GPR data ---
segy_file = "LINE01.sgy"
stream = _read_segy(segy_file, headonly=False)

# Build 2D data array [samples x traces]
n_traces = len(stream.traces)
n_samples = max(len(tr.data) for tr in stream.traces)
data = np.zeros((n_samples, n_traces))
for i, tr in enumerate(stream.traces):
    data[:len(tr.data), i] = tr.data

# --- Sampling rate ---
dt_us = stream.binary_file_header.sample_interval_in_microseconds  # microseconds
if dt_us == 0:
    raise ValueError("Sample interval is missing or zero in SEG-Y header. Set manually if needed.")

fs = 1e6 / dt_us  # Hz
nyquist = fs / 2
print(f"Sampling frequency: {fs/1e6:.2f} MHz, Nyquist: {nyquist/1e6:.2f} MHz")

# --- Dynamically determine bandpass range ---
lowcut = 0.05 * nyquist   # 5% of Nyquist
highcut = 0.95 * nyquist  # 95% of Nyquist
print(f"Applying bandpass filter: {lowcut/1e6:.2f}-{highcut/1e6:.2f} MHz")

# --- Apply bandpass filter ---
filtered_data = bandpass_filter(data, lowcut, highcut, fs)

# --- Plot before and after ---
fig, axs = plt.subplots(1, 2, figsize=(14, 6), sharey=True)

axs[0].imshow(data, cmap="gray", aspect="auto", origin="upper")
axs[0].set_title("Original GPR Data")
axs[0].set_xlabel("Trace")
axs[0].set_ylabel("Sample")

axs[1].imshow(filtered_data, cmap="gray", aspect="auto", origin="upper")
axs[1].set_title(f"Bandpass Filtered GPR\n({lowcut/1e6:.1f}-{highcut/1e6:.1f} MHz)")
axs[1].set_xlabel("Trace")

plt.tight_layout()
plt.show()

# --- Plot Frequency Spectra ---
middle_trace_idx = n_traces // 2
plt.figure(figsize=(10, 5))
plot_frequency_spectrum(data[:, middle_trace_idx], fs, title="Before Filter")
plot_frequency_spectrum(filtered_data[:, middle_trace_idx], fs, title="After Filter")
plt.title("Frequency Spectrum of Middle Trace")
plt.legend()
plt.tight_layout()
plt.show()

Explanation

Design the filter The cutoffs are normalized by Nyquist to design a fourth-order Butterworth bandpass. This shape is smooth in magnitude and, with forward-backward application, zero-phase in the output.

Validate cutoffs The code checks that both cutoffs fall strictly between zero and Nyquist. Invalid values raise a clear error.

Apply zero-phase filtering Each trace is filtered with filtfilt, which runs the filter forward and backward to cancel phase shift while doubling the effective order.

Compute sampling parameters The sample interval from the SEG-Y binary header gives fs in hertz and nyquist = fs/2. These determine the allowable band.

Inspect results in time and frequency Side-by-side images show structural changes after filtering. A frequency spectrum of a representative trace quantifies which bands were attenuated or retained.

Interpretation Suppressing very low and very high frequencies clarifies reflections within the passband. If reflections weaken or look distorted, the passband is too narrow; relax the cutoffs toward the data’s dominant band.
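The sampling-parameter arithmetic can be sanity-checked on its own; the 0.5 µs sample interval below is a hypothetical value, not one read from a real header:

```python
dt_us = 0.5            # hypothetical sample interval in microseconds
fs = 1e6 / dt_us       # sampling frequency in Hz (2 MHz here)
nyquist = fs / 2       # 1 MHz

# Cutoffs as fractions of Nyquist, as in the lesson code
lowcut = 0.05 * nyquist
highcut = 0.95 * nyquist

# Both cutoffs must fall strictly inside (0, nyquist), with lowcut < highcut
assert 0 < lowcut < highcut < nyquist
print(fs, nyquist, lowcut, highcut)
```

Any candidate passband can be validated this way before designing the filter.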

Example 2 — Comparing Multiple Passbands


It is often unclear which cutoff frequencies best enhance reflections. In this example, we apply three different passbands and compare their effects on both the radargrams and frequency spectra.

PYTHON

# Define three passband settings as fractions of Nyquist
passbands = [
    (0.02 * nyquist, 0.80 * nyquist),  # wide band
    (0.05 * nyquist, 0.50 * nyquist),  # medium band
    (0.10 * nyquist, 0.30 * nyquist)   # narrow band
]

fig, axs = plt.subplots(1, 3, figsize=(18, 6), sharey=True)

for i, (lowcut, highcut) in enumerate(passbands):
    filtered = bandpass_filter(data, lowcut, highcut, fs)
    axs[i].imshow(filtered, cmap="gray", aspect="auto", origin="upper")
    axs[i].set_title(f"{lowcut/1e6:.2f}-{highcut/1e6:.2f} MHz")
    axs[i].set_xlabel("Trace")
    if i == 0:
        axs[i].set_ylabel("Sample")

plt.tight_layout()
plt.show()

# Plot spectra for the middle trace across all three passbands
plt.figure(figsize=(10, 5))
for (lowcut, highcut) in passbands:
    filtered = bandpass_filter(data, lowcut, highcut, fs)
    plot_frequency_spectrum(filtered[:, middle_trace_idx], fs,
                            title=f"{lowcut/1e6:.2f}-{highcut/1e6:.2f} MHz")
plot_frequency_spectrum(data[:, middle_trace_idx], fs, title="Unfiltered")
plt.title("Frequency Spectra for Different Passbands")
plt.legend()
plt.tight_layout()
plt.show()

Explanation

Wide passband keeps nearly all signal but lets noise through.

Medium passband balances suppression of noise with preservation of reflections.

Narrow passband reduces noise strongly but can distort wavelets and weaken hyperbolas.

Comparing spectra confirms which frequencies remain in each case.

Example 3 — Filtering Individual Traces for QC


Before filtering the entire dataset, it is useful to test the filter on a few representative traces. This example applies the same bandpass to three traces and compares them with the originals.

PYTHON

# Select three traces across the profile
trace_indices = [n_traces // 4, n_traces // 2, 3 * n_traces // 4]

lowcut = 0.05 * nyquist
highcut = 0.95 * nyquist

plt.figure(figsize=(12, 8))

for i, idx in enumerate(trace_indices, 1):
    original = data[:, idx]
    filtered = bandpass_filter(data[:, [idx]], lowcut, highcut, fs).flatten()

    plt.subplot(3, 1, i)
    plt.plot(original, label="Original", alpha=0.6)
    plt.plot(filtered, label="Filtered", linestyle="--")
    plt.title(f"Trace {idx} before and after bandpass")
    plt.xlabel("Sample")
    plt.ylabel("Amplitude")
    if i == 1:
        plt.legend()

plt.tight_layout()
plt.show()

Explanation

Original vs filtered traces plotted directly help confirm that filtering removes unwanted frequencies without destroying arrivals.

Students can visually check if the wavelet shape is preserved.

This is a simple quality-control step before applying the filter to the full dataset.

Content from Deconvolution


Last updated on 2025-09-07

Overview

Questions

  • What is deconvolution in the context of GPR processing?
  • Why do recorded GPR signals differ from the true earth response?
  • How does spectral deconvolution recover sharper reflections?
  • What is the role of the stabilization parameter in deconvolution?
  • How can we evaluate the effect of deconvolution on traces and sections?

Objectives

  • Define deconvolution and explain its purpose in GPR signal processing.
  • Implement frequency-domain (spectral) deconvolution on individual traces and full sections.
  • Assess the impact of stabilization on the recovered signal.
  • Compare radargrams before and after deconvolution to evaluate reflector clarity.
  • Use single-trace examples to illustrate wavelet shortening.
Key Points
  • GPR records are a convolution of the source wavelet and subsurface reflectivity.
  • Deconvolution aims to remove the source wavelet, recovering a spike-like reflectivity.
  • Spectral deconvolution divides the spectrum by its amplitude, with stabilization to avoid noise blow-up.
  • Spiking deconvolution sharpens arrivals but requires careful parameter tuning.
  • Stabilization controls the trade-off between resolution gain and noise amplification.

Ground Penetrating Radar (GPR) does not record the true reflectivity of the subsurface. Instead, each trace represents the convolution of the antenna source wavelet with reflections from interfaces in the ground. This convolution blurs the reflectivity, producing wavelets rather than sharp spikes.

Deconvolution is a signal processing step that attempts to undo this effect by compressing the wavelet into something closer to a spike. The goal is improved temporal resolution, so that thin layers and closely spaced reflectors can be better distinguished.

A common approach is spectral deconvolution. In the frequency domain, the observed trace spectrum is divided by its amplitude spectrum (with a small stabilization factor added to prevent division by near-zero values). The phase is preserved. After inverse Fourier transform, the resulting trace shows sharpened reflections.

The stabilization constant is critical: too small and noise dominates; too large and resolution gains are lost. In practice, deconvolution is applied trace-by-trace across the section and results are compared to the original radargram to confirm improved clarity without excessive noise.
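The whitening idea can be demonstrated on a synthetic trace. The hand-written Ricker-style wavelet, spike positions, and stab value below are all illustrative, and the peak-to-mean ratio of the amplitude spectrum is used as a rough flatness measure:

```python
import numpy as np
from numpy.fft import rfft, irfft

def ricker(n, width):
    # Ricker-style (Mexican hat) wavelet defined by hand
    x = np.arange(n) - n // 2
    a = (x / width) ** 2
    return (1 - 2 * a) * np.exp(-a)

# Sparse reflectivity convolved with the wavelet = "recorded" trace
refl = np.zeros(512)
refl[[100, 180, 300]] = [1.0, -0.7, 0.5]
trace = np.convolve(refl, ricker(51, 4.0), mode="same")

# Spectral deconvolution: divide by the stabilized amplitude spectrum;
# the phase is untouched because the divisor is real and positive
spectrum = rfft(trace)
amp = np.abs(spectrum)
stab = 0.1
decon_spectrum = spectrum / (amp + stab * amp.max())
decon = irfft(decon_spectrum, n=len(trace))

flatness_before = amp.max() / amp.mean()
flatness_after = np.abs(decon_spectrum).max() / np.abs(decon_spectrum).mean()
print(flatness_before, flatness_after)  # the whitened spectrum is flatter
```

Larger stab values pull the whitened spectrum back toward its original shape, which is exactly the resolution-versus-noise trade-off explored later in this episode.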

Example 1 — Spectral Deconvolution on a 2D GPR Section


The following code implements spectral deconvolution for each trace of a SEG-Y GPR section and compares the radargram before and after processing.

PYTHON

import numpy as np
import matplotlib.pyplot as plt
from obspy.io.segy.segy import _read_segy
from scipy.fft import rfft, irfft, rfftfreq

def spectral_decon(trace, fs, stab=0.01):
    """
    Apply frequency-domain (spectral) deconvolution to a trace.
    """
    n = len(trace)
    f = rfftfreq(n, d=1/fs)  # frequency axis (kept for reference; not used below)
    spectrum = rfft(trace)
    amp = np.abs(spectrum)

    # Invert amplitude spectrum with stabilization; dividing by the real,
    # positive amplitude leaves the phase of the spectrum unchanged
    inverse_amp = 1 / (amp + stab * np.max(amp))
    decon_spectrum = inverse_amp * spectrum

    # Back to time domain
    decon_trace = irfft(decon_spectrum, n=n)
    return decon_trace

# --- Load SEG-Y GPR data ---
segy_file = "LINE01.sgy"
stream = _read_segy(segy_file, headonly=False)

# Build 2D data array [samples x traces]
n_traces = len(stream.traces)
n_samples = max(len(tr.data) for tr in stream.traces)
data = np.zeros((n_samples, n_traces))
for i, tr in enumerate(stream.traces):
    data[:len(tr.data), i] = tr.data

# --- Sampling frequency ---
try:
    dt_us = stream.binary_file_header.sample_interval_in_microseconds
    if dt_us == 0 or dt_us > 10:
        print(f"Header sample interval {dt_us} µs seems invalid. Overriding to 0.5 µs.")
        dt_us = 0.5
except AttributeError:
    print("Sample interval not found. Using default of 0.5 µs.")
    dt_us = 0.5

fs = 1e6 / dt_us
print(f"Sampling rate: {fs/1e6:.2f} MHz")

# --- Apply spectral deconvolution ---
decon_data = np.zeros_like(data)
for i in range(n_traces):
    decon_data[:, i] = spectral_decon(data[:, i], fs=fs, stab=0.5)

# --- Plot comparison ---
fig, axs = plt.subplots(1, 2, figsize=(14, 6), sharey=True)
axs[0].imshow(data, cmap="gray", aspect="auto", origin="upper")
axs[0].set_title("Original GPR Data")
axs[0].set_xlabel("Trace")
axs[0].set_ylabel("Sample")

axs[1].imshow(decon_data, cmap="gray", aspect="auto", origin="upper")
axs[1].set_title("Spectral Deconvolved GPR Data")
axs[1].set_xlabel("Trace")

plt.tight_layout()
plt.show()

Explanation:

Fourier transform each trace → separate amplitude and phase.

Invert amplitude spectrum, with stabilization constant to prevent division by zero.

Recombine with phase and inverse transform → sharpened trace.

Apply across all traces to produce deconvolved section.

Compare radargrams before and after → reflections appear narrower and better resolved.

Example 2 — Spiking Deconvolution on a Single Trace


Spiking deconvolution uses the autocorrelation of a trace to estimate its wavelet and design a filter that compresses it into a spike.

PYTHON

from scipy.linalg import toeplitz

def spiking_decon(trace, stab=0.1, filter_length=30):
    """
    Spiking deconvolution using Wiener filter design.
    """
    # Autocorrelation
    autocorr = np.correlate(trace, trace, mode="full")
    autocorr = autocorr[autocorr.size // 2:]

    # Toeplitz system
    R = toeplitz(autocorr[:filter_length])
    r = np.zeros(filter_length)
    r[0] = 1.0  # desired spike

    # Stabilization (prewhitening): scale by the zero-lag autocorrelation
    R += stab * autocorr[0] * np.eye(filter_length)

    # Solve Wiener-Hopf equations for the desired zero-lag spike
    f = np.linalg.solve(R, r)
    decon = np.convolve(trace, f)[:len(trace)]  # causal filter: keep first n samples
    return decon

# Apply to one representative trace
trace_idx = n_traces // 2
original_trace = data[:, trace_idx]
decon_trace = spiking_decon(original_trace, stab=0.1, filter_length=30)

plt.figure(figsize=(10, 4))
plt.plot(original_trace, label="Original", alpha=0.7)
plt.plot(decon_trace, label="Spiking Deconvolved", linestyle="--")
plt.title(f"Trace {trace_idx} before and after Spiking Deconvolution")
plt.xlabel("Sample")
plt.ylabel("Amplitude")
plt.legend()
plt.grid(True)
plt.show()

Explanation

Estimate wavelet from autocorrelation of the trace.

Solve Wiener equations to design filter that compresses wavelet into a spike.

Convolve filter with trace → spike-like reflectivity series.

Useful for single traces or small windows where spectral method is unstable.

Example 3 — Effect of Stabilization on Spectral Deconvolution


The stabilization constant (stab) controls the balance between resolution gain and noise. This example applies different values and compares results.

PYTHON

stab_values = [0.01, 0.1, 1.0]

fig, axs = plt.subplots(1, 3, figsize=(18, 6), sharey=True)

for i, stab in enumerate(stab_values):
    decon_data_test = np.zeros_like(data)
    for j in range(n_traces):
        decon_data_test[:, j] = spectral_decon(data[:, j], fs=fs, stab=stab)
    axs[i].imshow(decon_data_test, cmap="gray", aspect="auto", origin="upper")
    axs[i].set_title(f"Spectral Deconvolution\nstab={stab}")
    axs[i].set_xlabel("Trace")
    if i == 0:
        axs[i].set_ylabel("Sample")

plt.tight_layout()
plt.show()

Explanation

Small stab (0.01): strong wavelet compression but high noise amplification.

Moderate stab (0.1): balanced improvement; often practical.

Large stab (1.0): stable but reflections less sharpened.

Students can see the trade-off and understand why stabilization is critical.

Content from Wrap-up and Discussion


Last updated on 2025-09-07

Overview

Questions

  • What are the main processing steps applied to GPR data in this course?
  • How do these steps improve interpretability of radargrams?
  • Which processing choices require the most careful parameter selection?
  • How can Python be used flexibly to test and evaluate different workflows?

Objectives

  • Summarize the core processing steps learned: visualization, AGC, background removal, time-zero correction, bandpass filtering, and deconvolution.
  • Reflect on how each step alters the data and what geophysical problem it addresses.
  • Encourage critical evaluation of parameters and their impact on results.
  • Prepare learners to design their own GPR processing workflows for new datasets.
Key Points
  • GPR traces are raw recordings that must be processed for reliable interpretation.
  • Each processing step—AGC, background removal, time-zero correction, filtering, deconvolution—has a clear physical motivation.
  • Parameters (e.g., filter cutoffs, stabilization constants) control the trade-off between resolution and noise.
  • No single workflow is universal; effective GPR interpretation requires testing, visual inspection, and critical judgment.
  • Python offers a transparent environment for experimenting with and combining different methods.