{"response":{"docs":[{"system_create_dtsi":"2025-08-23T20:11:49Z","system_modified_dtsi":"2025-11-13T15:32:32Z","has_model_ssim":["Dataset"],"id":"3f462716x","accessControl_ssim":["dc9cb8a6-792c-4280-842f-62cbac5ba6bd"],"hasRelatedMediaFragment_ssim":["79407z82r"],"hasRelatedImage_ssim":["79407z82r"],"depositor_ssim":["brittobj@ucmail.uc.edu"],"depositor_tesim":["brittobj@ucmail.uc.edu"],"title_tesim":["Q2000 Deep Learning Model Package"],"date_uploaded_dtsi":"2025-08-23T20:11:47Z","date_modified_dtsi":"2025-11-13T15:32:32Z","isPartOf_ssim":["admin_set/default"],"doi_tesim":["doi:10.7945/tbvg-8n31"],"alternate_title_tesim":["Q2000 Deep Learning Model for detecting Maya Ruins in Lidar"],"geo_subject_tesim":["Yucatan Peninsula, Mexico"],"time_period_tesim":["Pre-classic to Post-classic Maya"],"college_tesim":["Arts and Sciences"],"department_tesim":["Geography, Anthropology"],"required_software_tesim":["ArcGIS Pro v 3.3 or newer with Spatial Analyst Extension enabled"],"note_tesim":["This project is an experimental work in progress intended to examine the practicability of creating a pan-Yucatan, multi-regional, broadscale model that could be used by archaeologists to more simply detect and inventory Maya structures for research and analysis by using Lidar data with a deep learning tool. \r\n\r\nPlease feel free to contact the developer for more information and updates, \r\n\r\n-benb\r\nBenjamin Britton\r\nbrittobj@mail.uc.edu\r\nbenjaminbritton@yahoo.com"],"creator_tesim":["Britton, Benjamin"],"publisher_tesim":["Benjamin Britton"],"subject_tesim":["Art, Geography, Anthropology, Archaeology, Computer Science"],"language_tesim":["English"],"description_tesim":["Q2000 Deep Learning Model Package\r\n\r\nThis Technical Resource Bundle provides the Q200 Deep Learning model for open access download and use. \r\nThe Q2000 DL model is built to detect Maya structures in Lidar data visualized at one meter per pixel. 
Currently, this repository contains the ESRI ArcGIS-compatible DL model in .dlpk format. We expect to convert the model in its current form into PyTorch (.pt) and TensorFlow (.h5) formats and to include them here for user access as well. \r\n\r\nThe Q2000.dlpk file was created in 2025 at the University of Cincinnati by Benjamin Britton, using ArcGIS Pro v.3.3 with data from the NASA G-LiHT program. It is intended as an experiment to evaluate the practicability of creating a broadscale deep learning model that can be used effectively to identify Maya structures in Lidar data across the length and breadth of the Yucatan Peninsula. \r\n\r\nThe Q2000 model is the subject of an article, \"Evaluating Broadscale Deep Learning for Maya Settlement Detection in G-LiHT Lidar\", a draft of which is also included in this dataset; the article examines the process and rationale of the model development in detail. The article has been accepted for publication, and it links to this permanent (DOI) publication site. \r\n\r\nTo use the model with ArcGIS Pro, convert a Lidar dataset to a 1 m/pixel DEM and visualize it as a 3-channel RGB Hillshade or other customized visualization to serve as source input. \r\n\r\n-Using the ArcGIS Pro Spatial Analyst Extension, invoke the geoprocessing tool \"Detect Objects Using Deep Learning\". \r\n-For Input Raster, add your Lidar visualization (a Hillshade visualization might be easiest for most users). \r\n-For Output Detected Objects, specify a new layer name; this is the layer on which the detections will be recorded and displayed. \r\n-For Model Definition, use Q2000.dlpk. \r\n-Unless you want to pass custom \"arguments\", you can leave the Arguments boxes at their defaults. 
\r\n-I suggest checking the box (On) for Non-Maximum Suppression (NMS) because it reduces the number of overlapping detections when target objects are located very close to each other, and I suggest an NMS ratio of 0.5; this will tend to suppress detections that overlap by more than 50 percent.\r\n-I suggest leaving Use Pixel Space unchecked (Off), since it is for an unrelated sort of object detection. \r\n-Before you click Run, open the \"Environments\" tab (at the top of the window, next to the \"Parameters\" tab). Leave all the settings there at their defaults, except scroll down to the section called \"Processor Type\", open the Processor Type drop-down, and choose GPU (it is set to CPU by default). \r\n-Then click Run, and the tool will generate a new layer showing its detections as bounding boxes around target objects. You can see details for each detection by opening the Attribute Table on the new layer. \r\n\r\nA screen capture of such a configuration is included in this site's dataset as the image Q200DemoScreenCap.jpg, showing a detection on G-LiHT transect Yucatan_South_GLAS_395 near Pixoyal: a detection of a Maya staircase is highlighted on the display, and its corresponding information is highlighted in the Attribute Table."],"license_tesim":["http://creativecommons.org/publicdomain/mark/1.0/"],"date_created_tesim":["March 15, 2025"],"related_url_tesim":["https://glihtdata.gsfc.nasa.gov/","https://benbritton.net"],"thumbnail_path_ss":"/assets/work-ff055336041c3f7d310ad69109eda4a887b16ec501f35afc0a547c4adb97ee72.png","suppressed_bsi":false,"actionable_workflow_roles_ssim":["admin_set/default-default-depositing"],"workflow_state_name_ssim":["deposited"],"member_ids_ssim":["79407z82r","mw22v7069","qn59q5747"],"file_set_ids_ssim":["79407z82r","mw22v7069","qn59q5747"],"visibility_ssi":"open","admin_set_tesim":["Default Admin Set"],"sort_title_ssi":"Q2000 DEEP LEARNING MODEL PACKAGE","human_readable_type_tesim":["Dataset"],"read_access_group_ssim":["public"],"edit_access_group_ssim":["admin"],"edit_access_person_ssim":["brittobj@ucmail.uc.edu"],"nesting_collection__pathnames_ssim":["3f462716x"],"nesting_collection__deepest_nested_depth_isi":1,"_version_":1848689773571473408,"timestamp":"2025-11-13T15:32:36.058Z","score":0.00049999997}],"facets":[{"name":"human_readable_type_sim","items":[{"value":"Dataset","hits":1,"label":"Dataset"}],"label":"Human Readable Type Sim"},{"name":"creator_sim","items":[{"value":"Britton, Benjamin","hits":1,"label":"Britton, Benjamin"}],"label":"Creator Sim"},{"name":"subject_sim","items":[{"value":"Art, Geography, Anthropology, Archaeology, Computer Science","hits":1,"label":"Art, Geography, Anthropology, Archaeology, Computer Science"}],"label":"Subject Sim"},{"name":"college_sim","items":[{"value":"Arts and Sciences","hits":1,"label":"Arts and Sciences"}],"label":"College Sim"},{"name":"department_sim","items":[{"value":"Geography, Anthropology","hits":1,"label":"Geography, Anthropology"}],"label":"Department Sim"},{"name":"language_sim","items":[{"value":"English","hits":1,"label":"English"}],"label":"Language Sim"},{"name":"publisher_sim","items":[{"value":"Benjamin Britton","hits":1,"label":"Benjamin Britton"}],"label":"Publisher Sim"},{"name":"date_created_sim","items":[{"value":"March 15, 2025","hits":1,"label":"March 15, 2025"}],"label":"Date Created Sim"},{"name":"member_of_collection_ids_ssim","items":[],"label":"Member Of Collection Ids Ssim"},{"name":"generic_type_sim","items":[{"value":"Work","hits":1,"label":"Work"}],"label":"Generic Type Sim"}],"pages":{"current_page":1,"next_page":null,"prev_page":null,"total_pages":1,"limit_value":10,"offset_value":0,"total_count":1,"first_page?":true,"last_page?":true}}}