Author ORCID Identifier

https://orcid.org/0000-0002-9951-2019

Document Type

Dataset

Rights

Available under a Creative Commons Attribution Non-Commercial Share Alike 4.0 International Licence

Grant Number

18/CRT/6222

Disciplines

Computer Sciences

Publication Details

The Geographical Information Analytics (GIA) Track at SAC 2023

Abstract

Automatic detection of road intersections is an important task in various domains such as navigation, route planning, traffic prediction, and road network extraction. Road intersections range from simple three-way T-junctions (degree 3) to complex large-scale junctions with many branches. The location and complexity of intersections are important considerations in route planning, for example when complex intersections must be avoided on pedestrian journeys. This is particularly relevant to vulnerable road users such as People with Blindness or Visual Impairment (PBVI) or children. Route planning applications, however, do not give information about the location or complexity of intersections, as this information is not available at scale. As a first step towards solving this problem, a mechanism is required for automatically mapping road intersection location and complexity, ideally using a globally available data source.

In this paper, we propose a deep learning framework to automatically detect the location and degree of intersections from satellite images using convolutional neural networks. For this purpose, we labelled 7,342 Google Maps images from Washington, DC, USA to create a dataset. This dataset covers a region of 58.98 km² and contains 7,548 intersections. We then applied a recent object detection model (EfficientDet) to detect the location of intersections, followed by a classification model (EfficientNet) to calculate their degree. Experiments based on the road network in Washington, DC, show that the accuracy of intersection detection is 88.6% within 5 metres. Most of our predicted intersection centres (≈ 80%) are within 2 m of the ground-truth centre. Additionally, our method detects the degree of intersections with an accuracy of 68.68%. We have published an automation script to enable other researchers to reproduce our dataset. This work is beneficial not only for PBVI but for society overall.

DOI

https://doi.org/10.21427/g1pb-0s89

Methodology

Dataset Automatic Acquisition and Annotation

The datasets contain satellite and hybrid images annotated for the intersection detection task. Although the two datasets created here should be available to the scientific community, direct publication or dissemination to third parties is prohibited by the Google API terms. We have therefore supplied an automated script that creates replica datasets through the Google Maps Static API using the downloader's own API key, so that other researchers can access the dataset. The script automatically retrieves the identical satellite or hybrid images and applies the correct annotations. As of the date of publication, the images included in these datasets were freely available for download through the Google Maps Static API.

To curate the dataset, we first downloaded two types of images (satellite and hybrid) from Google Maps and created two datasets: a satellite intersection dataset and a hybrid intersection dataset. Each dataset has four main classes depending on the number of road segments that form the intersection: 3-intersection, 4-intersection, 5-intersection, and 6-intersection. In addition to these classes, some images contain more than one intersection in the same image.

This folder contains the annotation folder for our dataset and a script file:

1. The annotation folder contains five sub-folders: 3-intersection, 4-intersection, 5-intersection, 6-intersection, and more than one. Each sub-folder contains annotation files (XML) for each degree of intersection, in which information about the degree and location of each intersection is included.
2. The script file uses Python code to download images from Google using the Maps Static API and create a dataset folder. The dataset folder will contain five sub-folders: 3-intersection, 4-intersection, 5-intersection, 6-intersection, and more than one.

You must specify the api_key, maptype, annotation_folder_path, and destination_path arguments in order to run this script:

1. api_key: You must create a project with a billing account in order to download images from Google Maps, then enable the Maps Static API and use the resulting API key in our script. For further information on how to create this API key, follow the directions at https://developers.google.com/maps/documentation/maps-static/get-api-key
2. maptype: The image type you wish to include in the dataset; we employed the satellite and hybrid types. Make sure to edit this field before running the script.
3. annotation_folder_path: The path to the annotation folder associated with this script.
4. destination_path: The path to the folder where the dataset will be created.
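To illustrate how the script arguments fit together, the sketch below builds a Google Maps Static API request URL for a single image tile. The parameter names (api_key, maptype) mirror the arguments described above, but the zoom level, image size, and the helper function itself are illustrative assumptions for this record, not the published script.

```python
# Minimal sketch: construct a Maps Static API URL for one tile.
# The zoom, size, and example coordinates are assumptions, not the
# settings used by the published dataset script.
from urllib.parse import urlencode

STATIC_MAPS_ENDPOINT = "https://maps.googleapis.com/maps/api/staticmap"


def build_tile_url(api_key, lat, lon, maptype="satellite", zoom=19, size="640x640"):
    """Return the Static Maps request URL for one tile centred on (lat, lon)."""
    if maptype not in ("satellite", "hybrid"):
        raise ValueError("this dataset uses only 'satellite' or 'hybrid' map types")
    params = {
        "center": f"{lat},{lon}",
        "zoom": zoom,
        "size": size,
        "maptype": maptype,
        "key": api_key,
    }
    return f"{STATIC_MAPS_ENDPOINT}?{urlencode(params)}"


# Example: one hybrid tile over Washington, DC (coordinates are illustrative).
url = build_tile_url("YOUR_API_KEY", 38.9072, -77.0369, maptype="hybrid")
```

Downloading the image is then a single HTTP GET of the returned URL; the script repeats this per annotated intersection and writes each tile into the matching degree sub-folder.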

Data Owner

yes

Funder

Science Foundation Ireland

Creative Commons License

Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
This work is licensed under a Creative Commons Attribution-NonCommercial-Share Alike 4.0 International License.
