Quick Start Guide

Get started with the basics of GRYFN Processing Tool by following this Quick Start Guide.

Workflow

Processing in the GRYFN Processing Tool follows a simple workflow while still providing a high level of transparency and customization. The figure below illustrates the basic workflow of data processing with the GRYFN Processing Tool.

[Figure: GRYFN Processing Tool data processing workflow]

Settings

Prior to using GRYFN Processing Tool for data processing, it is important to visit the Settings page. Set default file and folder locations now to ensure all dependencies are in place and to save time and clicks later.

  • For NanoHP users, make sure to set your radioCal location for automatic raw to radiance conversion.

  • For SBG users, make sure to set the path to the Qinertia CLI, and login to your SBG account.

  • For Applanix users:

    • Local installs: set the POSPac Batch Runner Executable and install the custom trajectory export format.

    • Cloud licenses: enter your Cloud ID and Client Secret, and ensure the Production server is set.


Bundle Data

To process data in GRYFN Processing Tool, raw data must be organized into a standard structure with appropriate metadata. This process is called Data Bundling.

Opening the Bundle Data home screen presents three options:

Create a New GRAW Bundle

Page 1

  1. Open the Bundle Data page.

  2. Create New GRAW Bundle.

  3. Optionally, select a Processing Extent file (.geojson, .kml, or .shp); a minimal example appears below, after these steps.

  4. Select the raw data source directory.

    1. Single Source: all collected flight data is stored in one directory, even if it contains sub-folders.

    2. Multi Source: raw flight data is split across multiple directories or separate drives. Locate the matching raw data directory for each raw data type.

  5. Select a Logs directory, if applicable.

The logs selection field will copy the entire directory to the GRAW folder. This is a good way to record extra notes, images, and files relevant to the data collection. Ensure this is set to a unique directory, rather than the raw data source directory (it can be a sub-directory of the raw data source).
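
If you do not already have a Processing Extent file from your flight planning or GIS software, one can be written by hand. The snippet below is a minimal, illustrative sketch using only Python's standard library; the corner coordinates and the output filename are placeholders you would replace with your own boundary (longitude/latitude order).

```python
import json

# Placeholder corner coordinates (longitude, latitude) for the area of interest.
# Replace these with the boundary of your own field or flight area.
boundary = [
    [-86.9200, 40.4700],
    [-86.9150, 40.4700],
    [-86.9150, 40.4740],
    [-86.9200, 40.4740],
    [-86.9200, 40.4700],  # GeoJSON polygons close by repeating the first vertex
]

extent = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "properties": {"name": "processing_extent"},
            "geometry": {"type": "Polygon", "coordinates": [boundary]},
        }
    ],
}

# Write the extent to a .geojson file that can be selected in step 3.
with open("processing_extent.geojson", "w") as f:
    json.dump(extent, f, indent=2)
```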

Page 2

Optionally, input Pilot, Location, and Weather Condition information.

Page 3

  1. Set a name for the bundle.

  2. Browse for a location to save the bundle.

  3. Press Create Bundle.

You will be brought to the Job Queue screen and bundling will begin.

Once the data finishes bundling you will receive a desktop notification saying the job has completed, and the job will move down to Completed Jobs.

Add to Existing Bundle

If parts of a dataset have already been bundled, but additional data needs to be added to the bundle, use the Add to Existing Bundle feature.

  1. Open the Bundle Data page

  2. Add to Existing Bundle

  3. Select GRAW Bundle

  4. Add any additional data required

Download from Sensor (GRYFN Gobi Only)

With GRYFN Gobi remote sensing systems, GNSS, LiDAR, and VNIR data can be downloaded directly from the sensor's internal storage.

  1. Open the Bundle Data page

  2. Download from Sensor

  3. Select the available sensor

  4. Select the desired dataset

  5. Select a download directory

After the data has finished downloading, you will automatically be brought to the Bundle Data page with the system calibration, data source, and logs directories selected. Continue on to create your GRAW Bundle.

Hyperspectral Radiance Conversion

Headwall Nano HP and Specim raw DNs are automatically converted to radiance in GRYFN Processing Tool. Legacy Headwall Nano HS systems and Headwall SWIR systems must have their raw data converted to radiance in Headwall software (SpectralView or the Batch Process Widget).

Nano HP Radiance

Headwall nHP raw data is converted to radiance during the final stages of VNIR processing. For this process to work properly, the folder containing radioCal data must be selected in Settings under Default Radiometric Calibration Location.
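
For context, a raw-DN-to-radiance conversion is, conceptually, a dark-current subtraction followed by application of per-band radiometric gains derived from the calibration (radioCal) data. The sketch below is a generic illustration of that idea only, not GRYFN's or Headwall's actual implementation; the array shapes and the integration-time scaling are assumptions.

```python
import numpy as np

def dn_to_radiance(raw_dn, dark_frame, gain, integration_time_s):
    """Generic dark-subtraction + gain radiometric calibration (illustrative only).

    raw_dn:             (lines, samples, bands) raw digital numbers
    dark_frame:         (samples, bands) mean dark-reference frame
    gain:               (samples, bands) per-pixel, per-band calibration gains
    integration_time_s: exposure time used during the flight (assumed scaling)
    """
    dn_corrected = raw_dn.astype(np.float64) - dark_frame  # remove sensor dark current
    radiance = dn_corrected * gain / integration_time_s    # scale to radiance units
    return np.clip(radiance, 0, None)                      # negative values are noise
```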

Legacy Headwall nHS and SWIR Radiance

Legacy Nano HS and SWIR raw data must be converted to radiance prior to processing in GRYFN Processing Tool; otherwise, GRYFN Processing Tool will perform operations on uncalibrated raw data. Legacy Nano HS systems can use Headwall's SpectralView for radiance conversion, while SWIR systems should use the Batch Process Widget.

Nano HS Radiance Processing

  1. Open SpectralView.

  2. Batch.

  3. Radiance Batch.

  4. Select HDR files.

  5. Next Page.

  6. Select Dark Reference file.

  7. Next Page.

  8. Start.

SWIR Radiance Processing

  1. Temporarily remove the "." from the GRAW name if the data is already bundled.

  2. Temporarily move the SWIR dark folder into the SWIR light folder.

  3. Open the Batch Process Widget.

  4. Under Flight, select Browse.

  5. Select the SWIR directory.

  6. Under Select Function, choose Rd.

  7. Process Flight. This performs raw -> rdk -> rd.

  8. Move the SWIR dark folder back out of the SWIR light folder.

  9. Add the "." back to the GRAW folder name.

Process Data

The New Job page is where a processing pipeline is paired with a GRAW bundle to process data. This page lets you create and store processing pipelines, set up processing jobs, and submit data for processing.

Manual GNSS Processing

When the correct settings are applied, GRYFN Processing Tool will automatically call Qinertia or POSPac in the background to process SBG or Applanix GNSS data through their command line tools. However, should you wish to process GNSS data manually, a brief overview of the workflow is given in the steps below. For in-depth explanations of specific settings, parameters, and customizations, please see the relevant documentation from the sensor manufacturer.

Prior to manually processing GNSS data, ensure GRYFN's custom export formats have been placed in the necessary directories. Please see the Settings page.

SBG Qinertia - GUI Processing

  1. Open Qinertia.

  2. New Project.

  3. Name the project.

  4. Choose the project save path.

  5. Import the raw GNSS data, then Next.

  6. Ensure the Motion Profile "UAV" is detected.

  7. Toggle Advanced Geodesy.

  8. Change the Project Coordinate System to the desired CRS (usually WGS 84 or NAD 83 / UTM zone ##; see the note after these steps for determining your UTM zone), then Next x2.

  9. In the Trajectory Preview, ensure the trajectory matches expectations for the mission and that there are no significant data gaps, then Next.

  10. Use automated base station selection, choose a specific base station from the map, or upload your own RINEX base data, then Next x5.

  11. Ensure Tight Coupling PPK is available.

  12. Click Finish.

  13. Locate the processing button (third from the right).

  14. Select Tight Coupling PPK.

  15. When processing is finished, go to File -> Exports.

  16. Select the GRYFN Export and Event profile.

  17. Launch Exports.

  18. Close Qinertia (the project is saved automatically after each step).
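
If you are unsure which UTM zone applies to your site in step 8, the standard 6-degree zone number can be computed directly from the site's longitude; the latitude only determines the N/S hemisphere. A quick sketch (the exceptions around Norway and Svalbard are not handled):

```python
def utm_zone(longitude_deg, latitude_deg):
    """Return the standard 6-degree UTM zone number and hemisphere for a point."""
    zone = int((longitude_deg + 180) // 6) + 1
    hemisphere = "N" if latitude_deg >= 0 else "S"
    return zone, hemisphere

# Example: a site near West Lafayette, Indiana falls in UTM zone 16N.
print(utm_zone(-86.92, 40.47))  # -> (16, 'N')
```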

POSPac UAV - GUI Processing

  1. Open POSPac.

  2. New Default Project.

  3. Save the project.

  4. Import the T04 files.

  5. Check that the trajectory matches expectations and that there are no data gaps.

  6. Find Base Stations, or import your own RINEX base data (if using RINEX base data, skip to step 10).

  7. Smart Select.

  8. Choose SingleBase if the nearest base station is < 20 km; otherwise choose SmartBase (a quick way to estimate this distance is sketched after these steps).

  9. SmartBase Quality Check (if using SmartBase).

  10. If using RINEX base data: using the dropdowns on the left, find the base station, right-click, and choose Compute RTX Coordinates. Once complete, right-click again and choose Set Base Station.

  11. GNSS-Inertial Processor.

  12. All Processing.

  13. Tools.

  14. Export.

  15. Update the Mapping Frame if you choose.

  16. Choose the GRYFN v0.3 Export File Format.

  17. Export.

  18. Save the project.

  19. Close POSPac.
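
For the SingleBase vs. SmartBase decision in step 8, the straight-line distance from your site to a candidate base station can be estimated with the haversine formula. A minimal sketch with placeholder coordinates:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Placeholder coordinates: flight site vs. a candidate base station.
site = (40.47, -86.92)
base = (40.43, -86.82)
dist = haversine_km(*site, *base)
print(f"{dist:.1f} km -> {'SingleBase' if dist < 20 else 'SmartBase'}")
```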

Creating Pipelines

A pipeline is a set of instructions and parameters for processing data. To make a pipeline, navigate to the New Job page:

  1. New Pipeline

    1. To import pipelines distributed by GRYFN, or shared by others in your organization, click the Import Pipeline button.

  2. New Task

  3. Select Sensor Tasks

    1. For each sensor, edit processing configuration parameters as needed.

  4. Add data products for each sensor

    1. Edit data product configuration parameters as needed.

    2. Some products, such as mosaics and colorized point clouds, rely on an elevation source. Make sure to include a LiDAR DSM/Point Cloud product, or add your own external elevation data, if you choose to generate these products.

Created pipelines can be added to folders. Right click on a pipeline from the New Job page to create a folder for it, or move it to an existing folder. They can also be removed from folders by right clicking a pipeline in a folder. If all pipelines are removed from a folder, that folder will be deleted.

Pipelines can be edited by selecting a pipeline and clicking the Edit button. Pipelines can also be exported to share with other users by selecting a pipeline and clicking the Export button.

Create a New Job

A job is a data processing task. To process data, create a new job:

  1. Select the New Job page

  2. Select a processing pipeline

  3. Select a .GRAW bundle

  4. If processing GNSS data, ensure Force Reprocess is toggled

  5. If processing hyperspectral data and reflectance data is desired, perform target selection for ELM correction (a conceptual sketch of the ELM fit follows this section):

    1. Scroll down to find the hyperspectral target selection area.

    2. Click Browse next to Select Target Files, then load the target calibration files.

    3. Locate the hyperspectral image(s) with reflectance targets visible.

    4. Click Draw Target Bounds.

      1. Navigation controls are visible by hovering over the tooltip icon.

    5. Select each target from the dropdown, then draw a small box over the corresponding target. On the right side of the window you will see a correlation graph based on the target selection drawing.

      1. Make sure your selection includes only pixels on the target.

      2. The difference between the minimum and maximum values at a given wavelength should be relatively small. If it is not, consider re-measuring the target.

After drawing three targets you'll see an R^2 Measurement Feedback graph. The R^2 graph is color-coded to indicate good or bad measurements.

Click Save after all targets have been measured.

If targets are visible in other cubes, they can also be measured.
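
For context, the ELM (empirical line method) correction is a per-band linear fit between the radiance sampled inside each drawn target box and that target's known reflectance from the target calibration files; the R^2 feedback is the goodness of that fit. The sketch below illustrates the idea for a single band with three targets and placeholder numbers; it is not GRYFN's implementation.

```python
import numpy as np

# Mean at-sensor radiance sampled inside each drawn target box (placeholder values)
measured_radiance = np.array([12.5, 55.3, 98.1])
# Known reflectance of the same three targets at this wavelength, from the target files
known_reflectance = np.array([0.05, 0.32, 0.56])

# Per-band empirical line: reflectance = gain * radiance + offset
gain, offset = np.polyfit(measured_radiance, known_reflectance, 1)

# R^2 of the fit -- conceptually what the Measurement Feedback graph reports
predicted = gain * measured_radiance + offset
ss_res = np.sum((known_reflectance - predicted) ** 2)
ss_tot = np.sum((known_reflectance - known_reflectance.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

# Applying the fit converts a whole radiance band to reflectance:
#   reflectance_band = gain * radiance_band + offset
print(gain, offset, r_squared)
```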

Prior to submitting a job, a pipeline's parameters can be edited on a per-submission basis; changes made here will not affect the pipeline file, only the parameters for this particular job. Click Pipeline Summary before submitting to make all parameters available to edit.

Exclusive to Pipeline Summary editing is a General tab, which provides additional options relevant to the particular job.

Processing with Additional Scripts

GRYFN Processing Tool supports user-added Python scripts. If you have additional scripts you would like to run with your pipeline, add them to your pipeline as a new task.

Scripts can be placed in your Default Script Location (configurable through Settings -> Scripts).
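
The interface GRYFN uses to invoke a script task (arguments, environment, expected outputs) is not covered in this quick start, so the example below is only a generic, self-contained illustration of the kind of standalone Python script you might add: it takes an output directory and writes a simple inventory of the data products it finds. Adapt it to whatever your pipeline's script task actually expects.

```python
#!/usr/bin/env python3
"""Illustrative post-processing script: inventory the files in a job's output directory."""
import argparse
import csv
from pathlib import Path

def main():
    parser = argparse.ArgumentParser(description="Summarize processed data products.")
    parser.add_argument("output_dir", help="Directory containing the job's data products")
    args = parser.parse_args()

    out_dir = Path(args.output_dir)
    # Collect every file under the output directory with its size in bytes.
    rows = [
        (path.relative_to(out_dir), path.stat().st_size)
        for path in sorted(out_dir.rglob("*"))
        if path.is_file()
    ]

    # Write a simple CSV inventory alongside the products.
    with open(out_dir / "product_inventory.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["file", "size_bytes"])
        writer.writerows(rows)

if __name__ == "__main__":
    main()
```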

Job Submission

Clicking Submit Job will begin data processing and move you to the Job Queue page to monitor the job in progress.

Job Queue

Clicking on Job Queue shows what is currently being processed, what is in the queue, and what has been completed. Multiple jobs can be queued up in series to begin processing once the previous job has finished.
