Funded Projects

ID Name Period
STC-19-00 Advancing Spatiotemporal Studies to Enable 21st Century Sciences and Applications

Many 21st-century challenges to contemporary society, such as natural disasters, unfold in both space and time and require that spatiotemporal principles and thinking be incorporated into the computing process. A systematic investigation of these principles would advance human knowledge by providing trailblazing methodologies for exploring the next generation of computing models to address the challenges. This overarching center project analyzes and collaborates with international leaders and the science advisory committee to generalize spatiotemporal thinking methodologies, produce efficient computing software and tools, elevate application impact, and advance human knowledge and intelligence. Objectives are to: a) build spatiotemporal infrastructure from theoretical, technological, and application perspectives; b) innovate spatiotemporal studies with new tools, systems, and applications; c) educate K-16 and graduate students with proper texts and curricula; and d) develop a community for spatiotemporal studies from center sites and members to the regional, national, and global levels through IAB meetings, symposia, and other venues.

2019-2020
STC-19-01 Innovating a Computing Infrastructure for Spatiotemporal Studies 

In Phase I, the spatiotemporal innovation center built a 500-node cloud computing facility, which enables most projects of the center. After five years of operating this infrastructure, we see a need for an upgraded infrastructure with more RAM, faster CPUs, and more storage on each node, ideally with a GPU cluster that can help us address the growing challenges of image/graphics processing and deep learning in the Phase II operation. Based on our IAB's recommendations and center projects, we propose to a) develop and maintain an upgraded computing infrastructure with more computing power and graphics and deep learning capabilities, b) provide spatiotemporal computing coordination and research to all center projects with computing needs by maintaining highly capable research staff to support and optimize the computing infrastructure, c) serve campus computing needs with a spatiotemporal interest to gain broader impact and engagement of scientists and students, and d) adopt and develop advanced spatiotemporal computing technologies to innovate the next generation of computing tools.

2019-2020
STC-19-02 Spatiotemporal Innovation Testbed

The first phase of the Spatiotemporal I/UCRC has witnessed many spatiotemporal innovations over the past six years. Like innovations in any other domain and technology area, spatiotemporal innovations follow the hype cycle of maturity, and many of them have emerged only in recent years. The community needs a comprehensive information source on what, when, where, and how much effort is needed to mature, adopt, and operate these innovations. To reduce the inflated expectations of the innovation hype cycle and meet this community need, we propose to establish a testbed utilizing the center's infrastructure, as part of the spatiotemporal infrastructure the center envisions implementing over its 15 years of investigation. This project will draw best practices from past investigations, such as the computing infrastructure, big data testbed, cloud testbed, EarthCube testbed, and ESIP testbed, to maintain and automate a testbed environment. The testbed will serve as a platform for members, faculty, students, and the community to validate and verify new technologies and emerging innovations, and to produce white papers, review papers, and evaluation publications for the broadest impact.

2019-2020
Hvd-19-01 China Data Lab: Developing an online spatial data sharing and management platform

This project develops an open-source online platform for spatial data discovery, sharing, and management, specializing in China data. The platform will allow researchers to share their spatial data with others online, conduct spatial data analysis with geospatial and statistical tools on the cloud, develop data-driven case studies, and share data and results as a package with others. The platform will also support case-based training programs on spatiotemporal data analysis for economic, social, public health, urban planning, and other research subjects, focused on China.

2019-2020
Hvd-19-06 Collaboration Between RMDS and Dataverse

Dataverse is an open-source online data repository platform where users can share, preserve, cite, explore, and analyze research data. RMDS Lab is a startup company developing transformative technologies for research with big data and AI. This project establishes a collaboration between the two teams and two platforms to create synergy that will advance the shared goal of supporting scholars worldwide in data-driven research. The main objective is to explore solutions for applying AI technology to evaluate data science studies; to provide measurable references for data scientists on the accuracy, impact, replicability, applicability, and other merit scores of data science case studies; and to promote high-quality data science research through platform development, data sharing, community building, and user training.

2019-2020
STC-15-02 Dynamic Mapping of Secondary Cities

Secondary cities are non-primary cities characterized by population size, function, and/or economic status. They are urban centers of governance, logistics, and production, and are often data poor. This project is a global initiative to address the critical geospatial data needs of secondary cities. The objective is to enhance emergency preparedness, human security, and resilience. The project facilitates partnerships with local organizations for data generation and sharing using open-source tools, and focuses on applied geography and human geography thematic areas.

2019-2020
GMU-19-01 Cloud Classification

Cloud types, coverage, and distribution have a significant influence on the characteristics and dynamics of the global climate and are directly related to the energy balance of the Earth. Therefore, accurate cloud classification and analysis are essential for research on the atmosphere and climate change. Cloud classification assigns a predetermined label to clouds in an image, e.g., cirrus, altostratus, or altocumulus. With cloud segmentation, satellite imagery can be used to support a series of local mesoscale climate analyses such as rainy cloud detection, cyclone detection, or extreme weather event (e.g., heavy rainfall) prediction. However, distinguishing different clouds in satellite imagery is challenging because of intraclass spectral variations and interclass spectral similarities.

Traditionally, cloud types are classified using selected features and thresholds such as cloud-top pressure (CTP), cloud optical thickness (COT), brightness temperature (BT), and the multilayer flag (MLF). One drawback is that model accuracy relies heavily on threshold and feature selection. Recent years have witnessed successful deep learning applications in automatic feature selection for object detection in images, aided by CNN models and their variants such as VGGNet and ResNet. Inspired by these successes in computer vision, we propose to implement an automatic cloud classification system based on deep neural networks to identify eight kinds of cloud from geostationary and polar-orbiting satellite data, using cloud types from the 2B-CLDCLASS product of CloudSat-CPR as reference labels.
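
To give a flavor of the kind of classifier described above, the following is a minimal sketch, not the project's actual model: a small PyTorch CNN that maps a multispectral satellite patch to one of eight cloud-type labels. The patch size (64x64) and the number of spectral bands (10) are hypothetical placeholders for whatever the real preprocessing pipeline produces.

```python
# Minimal sketch (not the project's actual model): a small CNN that assigns one of
# eight cloud-type labels to a multispectral satellite image patch.
import torch
import torch.nn as nn

NUM_BANDS = 10      # hypothetical number of input spectral channels
NUM_CLASSES = 8     # eight cloud types, following the 2B-CLDCLASS categories

class CloudTypeCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(NUM_BANDS, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64 -> 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32 -> 16
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),               # global average pooling
        )
        self.classifier = nn.Linear(128, NUM_CLASSES)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

if __name__ == "__main__":
    model = CloudTypeCNN()
    patch = torch.randn(4, NUM_BANDS, 64, 64)      # dummy batch of 4 patches
    logits = model(patch)                          # shape: (4, 8)
    print(logits.argmax(dim=1))                    # predicted cloud-type index per patch
```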

2019-2020
GMU-19-03 Planetary Defense 

Programs like NASA's Near-Earth Object (NEO) Survey supply the planetary defense (PD) community with the information necessary for NEO mitigation. However, information about detecting, characterizing, and mitigating NEO threats is still dispersed across different organizations and scientists because of the lack of a structured architecture. This project aims to develop a knowledge-discovery search engine that provides discovery of and easy access to PD-related resources by developing 1) a domain-specific Web crawler to automate large-scale, up-to-date discovery of PD-related resources, and 2) a search ranking method to better rank the search results. The Web crawler is based on Apache Nutch, a well-recognized, highly scalable web crawler. In this research, Apache Nutch is extended in three aspects: 1) a semi-supervised approach is developed to create a PD-related keyword list; 2) an improved similarity scoring function is used to set the priority of web pages in the crawl frontier; and 3) an adaptive approach is designed to re-crawl and update web pages. The search ranking module is built upon Elasticsearch. Rather than using the basic search relevance function of Elasticsearch, a PageRank-based link analysis and an LDA-based topic modeling approach are developed to better support the ranking of interconnected web pages.
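
The sketch below illustrates only the general idea of blending a text-relevance score with a PageRank-style link-analysis score when re-ranking crawled pages. The page names, relevance scores, and mixing weight are hypothetical; the actual system is built on Apache Nutch and Elasticsearch rather than this toy code.

```python
# Illustrative sketch: combine hypothetical text-relevance scores with PageRank
# over a toy link graph of crawled PD-related pages.
import networkx as nx

link_graph = nx.DiGraph([
    ("neo-survey", "mitigation-report"),
    ("mitigation-report", "impact-models"),
    ("impact-models", "neo-survey"),
    ("news-item", "neo-survey"),
])
pagerank = nx.pagerank(link_graph, alpha=0.85)

# Hypothetical per-page query relevance (e.g., from a text-scoring step).
relevance = {"neo-survey": 0.9, "mitigation-report": 0.7,
             "impact-models": 0.5, "news-item": 0.2}

def combined_score(page, w=0.6):
    """Blend text relevance with link authority; w is an assumed mixing weight."""
    return w * relevance.get(page, 0.0) + (1 - w) * pagerank.get(page, 0.0)

ranked = sorted(relevance, key=combined_score, reverse=True)
print(ranked)
```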

2019-2020
GMU-19-04 Micro-scale Urban Heat Island Spatiotemporal Analytics and Prediction Framework

As one of the adverse effects of urbanization and climate change, the Urban Heat Island (UHI) can affect human health. Most research has relied on remote sensing imagery or sparsely distributed station sensor data and has focused on broad understanding of the meso- or city-scale UHI phenomenon and mitigation support; challenges remain at the micro level. This project aims to: 1) conduct an in-depth investigation of human-weather-climate relations for urban areas; 2) fill the gap between short-term weather impacts from buildings, traffic, and human mobility and the long-term microclimate by understanding these relations with real-time urban sensing (IoT) data; 3) establish a machine-learning-enabled ensemble model for fast near-future temperature forecasts that considers the human-weather-climate relationships; and 4) provide guidelines for designing and implementing precautionary local-human-activity management strategies based on the forecasts to reduce public health risks, allowing better urban living spaces.

2019-2020
GMU-18-01 Rapid extreme weather events detection and tracking from 4D/5D climate simulations

Climate simulations provide valuable information representing the state of the atmosphere, ocean, and land. Increasingly advanced computational technologies and Earth observation capabilities have enabled climate models with higher spatial and temporal resolution, providing ever more realistic coverage of the Earth. This high spatiotemporal resolution also provides the opportunity to more precisely pinpoint and identify/segment the occurrence of extreme weather events, such as tropical cyclones, which can have dramatic impacts on populations and economies. Deep learning techniques are considered one of the breakthroughs of recent years, achieving compelling results on many practical tasks including disease diagnosis, facial recognition, and autonomous driving. We propose to utilize deep learning techniques for the rapid detection of two extreme weather events: tropical cyclones and dust storms. Deep learning models trained on past climate simulations will inform the effectiveness of the approach on future simulations. Our technological motivation is that current high-resolution simulations and observations generate too much data for researchers, scientists, and organizations to store for their applications. Machine learning methods performing real-time segmentation and classification of features relevant to extreme weather events can generate a list or database of these features, and detailed information can be obtained by rerunning the simulation at high spatiotemporal resolution when needed.

2018-2019
GMU-18-02 Climate Indicators downscaling

Weather conditions are among the factors people care most about in daily life. People may want to check the weather forecast every day, or even every few hours, especially for activities that are very sensitive to temperature, precipitation, or winds, such as taking flights. However, civil weather forecast data are currently issued every six hours, which falls far short of actual needs, and the spatial resolutions of most weather data, such as precipitation and surface winds, are around several kilometers, which is too coarse for some regions. This project focuses on weather data downscaling to meet the increasing need for short-term forecasts with high spatial and temporal resolution.
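
As a point of reference only, the snippet below shows the simplest possible spatial downscaling baseline: resampling a coarse grid to a finer one by interpolation. The grid sizes and zoom factor are hypothetical, and a real statistical or machine-learning downscaling scheme would additionally learn relationships with high-resolution covariates (terrain, land cover, station observations) rather than interpolating alone.

```python
# Minimal downscaling baseline: bilinear resampling of a coarse grid to a finer grid.
import numpy as np
from scipy.ndimage import zoom

coarse = np.random.rand(10, 10)   # e.g., precipitation on a coarse grid (dummy data)
fine = zoom(coarse, 5, order=1)   # bilinear resampling to a 5x finer grid (50x50)

print(coarse.shape, "->", fine.shape)
```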

2018-2019
STC-17-01 Big Data Learning Platform

The objective is to investigate and advance the technology of a deep learning system to integrate big data, learn hidden knowledge, and discover new information of significance to human dynamics, the Earth environment, and space detection.
The expected results are to 1) integrate a deep learning system based on hybrid cloud computing for geospatial intelligence, with SOA algorithms/models supported; 2) build an advanced big data system to manage and provide fast access to various types of data; 3) test a number of scenarios, including weather, events, new information, and space detection, on the developed system; and 4) check the quality of newly discovered information against a knowledge base and factual information.

GMU-17-04 Automatic Near-Real-Time Flood Detection using Suomi-NPP/VIIRS Data

Flood detection software has been developed to generate near real-time flood products from VIIRS imagery. SNPP/JPSS VIIRS data show particular advantages for flood detection. The major activity and accomplishment in this reporting period was applying the flood products to the flooding generated by Hurricane Harvey. The plan for the next reporting period is to 1) improve the current flood product, 2) develop 3-D flood parameters (flood water surface level, flood water depth, and high-resolution flood maps), and 3) further analyze regional flood patterns.

STC-16-03 Big Data Deep Learning

Big Data has emerged with unprecedented value for research, development, innovation, and business, and most of it carries a spatiotemporal stamp. However, transforming Big Data into value poses grand challenges for big data management, spatiotemporal data modeling, and spatiotemporal data mining. To enable this transformation, we propose to develop a deep learning platform building on the center's current spatiotemporal innovation projects. The platform will use advanced data management and computing technologies to mine valuable knowledge from big spatiotemporal data. More robust models will be built to discover implicit spatiotemporal dynamic patterns in climate, dust storms, and weather from remote sensing and model simulation data to address environmental and health issues of concern. Meanwhile, user-generated data, such as PO.DAAC and social media, will be mined to improve geospatial data discovery and form a knowledge base for spatiotemporal data. In addition, high-performance computing (e.g., GPU and parallel computing) and cloud computing technologies will be utilized to accelerate the knowledge discovery process. The proposed deep learning platform for big spatiotemporal data will develop/integrate a suite of software for big spatiotemporal data mining and contribute a core to spatiotemporal innovation.

GMU-16-05 Data Container Study for Handling array-based data using Rasdaman, SciDB, Hive, Spark, and MongoDB

Geoscience communities have developed various big data storage solutions, such as Rasdaman and Hive, to address the grand challenges of massive Earth observation data management and processing. To examine the readiness of current technologies and tools for supporting big Earth observation data archiving, discovery, access, and processing, we investigated and compared several popular data solutions, including Rasdaman, SciDB, Hive, Spark, ClimateSpark, and MongoDB. Using different types of spatial and non-spatial queries, and datasets stored in common scientific data formats (e.g., NetCDF and HDF), the features and performance of these data containers are systematically compared and evaluated. The evaluation metrics focus on performance in discovering and accessing datasets for upper-level geoscience applications. The computing resources (e.g., CPU, memory, hard drive, network) consumed while performing the various queries and operations are monitored and recorded for the performance evaluation. Initial results show that 1) MongoDB has the best performance for queries on statistical and operational functions, but does not support the NetCDF data format as well as HDF; 2) ClimateSpark performs better than plain Spark and Hive in most cases, except for single-point extraction over a long time series; and 3) Hive is not good at querying small datasets because it uses MapReduce as the processing engine, which incurs considerable overhead. A comprehensive report will detail the experimental results and compare the containers' pros and cons regarding system performance, ease of use, accessibility, scalability, compatibility, and flexibility.
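
The following is a rough sketch of the kind of harness used to time queries and record coarse resource usage while comparing containers. The container names, query functions, and repeat counts are placeholders; the actual study issued spatial and non-spatial queries against Rasdaman, SciDB, Hive, Spark, ClimateSpark, and MongoDB with their own client APIs.

```python
# Sketch of a query benchmarking harness recording wall time, CPU, and memory usage.
import time
import psutil

def run_benchmark(name, query_fn, repeats=3):
    """Run a query several times, recording wall time and coarse CPU/memory usage."""
    results = []
    for _ in range(repeats):
        psutil.cpu_percent(interval=None)            # reset the CPU counter
        start = time.perf_counter()
        query_fn()                                   # e.g., a point/area/time-series query
        elapsed = time.perf_counter() - start
        results.append({
            "container": name,
            "seconds": elapsed,
            "cpu_percent": psutil.cpu_percent(interval=None),
            "mem_percent": psutil.virtual_memory().percent,
        })
    return results

# Example usage with a dummy "query" standing in for a real container call.
print(run_benchmark("dummy", lambda: sum(range(1_000_000))))
```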

2015-2016
STC-15-01 Developing and Maintaining a Spatiotemporal Computing Infrastructure Project Space

Taking demands from our IAB and center projects into account, this project will a) develop and maintain a spatiotemporal computing infrastructure by acquiring a high-performance computing facility, b) provide spatiotemporal computing coordination and research to all center projects with computing needs by maintaining highly capable research staff to support and optimize the computing infrastructure, and c) adopt and develop advanced spatiotemporal computing technologies to innovate the next generation of computing tools.

2015-2020
STC-15-02 Mapping Secondary Cities for Resiliency and Emergency Preparedness

Mapping Secondary Cities for Resiliency and Emergency Preparedness is a project designed to build local capacity in using geospatial science and technology to create data in support of emergency preparedness. Globally, secondary cities are unique environments that are experiencing rapid urbanization and have generally been poorly mapped, yet mapping these cities is an essential activity in building resiliency, planning and managing urban growth, and devising robust emergency management plans. The project will identify countries with rapidly growing secondary cities where governments have recognized their challenges and are looking to develop policies and programs to foster manageable growth and development. Projects will be implemented in secondary cities, typically non-primary cities with populations ranging from 100,000 to 5 million. The specific emphasis of this project is on generating geospatial data on infrastructure for emergency preparedness, a key need in the development of secondary cities. A pilot project for Cusco, Peru is proposed to build an open-source, scalable geodataset by enhancing local expertise and facilitating alignment between communities, governments, and agencies through the establishment of long-term partnerships and networks. Project teams will comprise local university academics and students with a focus on geospatial science, regional non-governmental organizations, and partners from local municipalities. Training programs will be designed to build long-term capacity for generating geospatial data from local knowledge, commercial satellite imagery, and other geographic tools. Project teams will coordinate data collection efforts targeted at city priorities, which may include essential data for emergency management (e.g., building footprints, roads, river networks, city infrastructure), environmental monitoring (e.g., river health, water quality, open space and parks, ecosystem services), or urban planning (e.g., informal settlements, sanitation, water treatment, zoning). These datasets provide the basis for long-term planning across multiple sectors. The training program will develop on-site, in-country expertise for specific local needs, laying the groundwork for follow-up “train the trainer” programs in other secondary cities. An overview and hands-on instruction will be provided using geographic tools grounded in sound geospatial information science, including cartographic best practices, geospatial analysis, database management, and field data collection techniques. Preliminary datasets will be stored and shared using both Windows and open-source platforms in a hybrid approach for data sharing and dissemination.

2015-2020
GMU-15-01 ClimateSpark: An In-memory Distributed Computing Framework for Big Climate Data Analytics Project Space

Large collections of observational, reanalysis, and climate model output data are being assembled as part of the work of the Intergovernmental Panel on Climate Change (IPCC). These collections may grow to as large as 100 PB in the coming years. The NASA Center for Climate Simulation (NCCS) will host much of these data. Ideally, such big data can be provided to scientists with on-demand analytical and simulation capabilities to relieve them of time-consuming computational tasks. However, realizing this goal is challenging because processing such big data requires efficient big data management strategies, complex parallel computing algorithms, and scalable computing resources. Based on the extensive experience at NCCS and GMU in big climate data analytics, Hadoop, cloud computing, and other technologies, a high-performance computing framework, ClimateSpark, has been developed to better support big climate data analytics. A hierarchical indexing strategy has been designed and implemented to support efficient management and querying of big multi-dimensional climate data in a scalable environment. A high-performance Taylor Diagram service has been developed as a tool to help climatologists evaluate different climate model outputs. A web portal has been developed to ease remote interaction between users, data, analytic operations, and computing resources using SQL, Scala/Python notebooks, or a RESTful API.
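
To convey the idea behind a hierarchical spatiotemporal index in the simplest terms, the toy sketch below maps (time step, latitude block, longitude block) keys to the locations of data chunks so that a box query touches only the chunks it needs. The block size, file paths, and offsets are all hypothetical; ClimateSpark's real index operates over HDFS blocks and multi-dimensional climate arrays.

```python
# Toy spatiotemporal chunk index: query a time/lat/lon box and return chunk locations.
from collections import defaultdict

BLOCK = 10  # degrees per spatial block (assumed)

index = defaultdict(list)

def register_chunk(t, lat0, lon0, location):
    index[(t, lat0 // BLOCK, lon0 // BLOCK)].append(location)

def query(t_range, lat_range, lon_range):
    """Return chunk locations overlapping a time/lat/lon box."""
    hits = []
    for t in range(*t_range):
        for lat in range(lat_range[0] // BLOCK, lat_range[1] // BLOCK + 1):
            for lon in range(lon_range[0] // BLOCK, lon_range[1] // BLOCK + 1):
                hits.extend(index.get((t, lat, lon), []))
    return hits

register_chunk(0, 35, 115, "hdfs:///merra/chunk_000")   # hypothetical chunk location
register_chunk(1, 35, 115, "hdfs:///merra/chunk_001")
print(query((0, 2), (30, 40), (110, 120)))
```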

2015-2016
GMU-15-08 Automatic Near-Real-Time Flood Detection using Suomi-NPP/VIIRS Data

Near real-time satellite-derived flood maps are invaluable to river forecasters and decision-makers for disaster monitoring and relief efforts. With support from the JPSS Proving Ground and Risk Reduction Program, a flood detection package has been developed using SNPP/VIIRS (Suomi National Polar-orbiting Partnership/Visible Infrared Imaging Radiometer Suite) imagery to generate daily near real-time flood maps automatically for National Weather Service (NWS) River Forecast Centers (RFCs) in the USA. In this package, a series of algorithms has been developed, including water detection, cloud shadow removal, terrain shadow removal, minor flood detection, water fraction retrieval, and flood water determination. The package has been running routinely on direct-broadcast SNPP/VIIRS data since 2014. Flood maps were carefully evaluated by river forecasters using airborne imagery and hydraulic observations. Offline validation was also performed via visual inspection of VIIRS false-color composite images on more than 10,000 granules across a variety of scenes, and by comparison with river gauge observations year-round. Evaluation has shown high accuracy, and the product's promising performance has won positive feedback and recognition from end users.
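
As a highly simplified illustration of the first step in such a chain, the snippet below computes an NDWI-style water index from two reflectance bands and thresholds it. The band choices and the 0.0 threshold are assumptions for illustration only; the operational VIIRS package combines many algorithms (cloud and terrain shadow removal, water fraction retrieval, minor-flood detection) beyond a single threshold test.

```python
# Simplified NDWI-style water detection from green and near-infrared reflectance.
import numpy as np

def water_mask(green, nir, threshold=0.0):
    """Return a boolean water mask from green and near-infrared reflectance arrays."""
    ndwi = (green - nir) / np.clip(green + nir, 1e-6, None)
    return ndwi > threshold

green = np.random.rand(4, 4)   # dummy reflectance values
nir = np.random.rand(4, 4)
print(water_mask(green, nir))
```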

2015-2016
GMU-15-09 Planetary Defense Project Space

Programs like NASA's Near-Earth Object (NEO) Survey supply the planetary defense (PD) community with the information necessary for NEO mitigation. However, information about detecting, characterizing, and mitigating NEO threats is still dispersed across different organizations and scientists because of the lack of a structured architecture. This project aims to develop a knowledge-discovery search engine that provides discovery of and easy access to PD-related resources by developing 1) a domain-specific Web crawler to automate large-scale, up-to-date discovery of PD-related resources, and 2) a search ranking method to better rank the search results. The Web crawler is based on Apache Nutch, a well-recognized, highly scalable web crawler. In this research, Apache Nutch is extended in three aspects: 1) a semi-supervised approach is developed to create a PD-related keyword list; 2) an improved similarity scoring function is used to set the priority of web pages in the crawl frontier; and 3) an adaptive approach is designed to re-crawl and update web pages. The search ranking module is built upon Elasticsearch. Rather than using the basic search relevance function of Elasticsearch, a PageRank-based link analysis and an LDA-based topic modeling approach are developed to better support the ranking of interconnected web pages.

2015-2016
UCSB-15-01 Linked Data for the National Map

The proposed project aims at providing Linked Data access to National Map vector data, which resides in the ArcGIS geodatabase format. These data include hydrography, transportation, structures, and boundaries. The project will address the challenge of how to make large data volumes both available and queryable efficiently at the same time. Previous research and the PIs' experience suggest that, in the context of the National Map, offering hundreds of gigabytes of Linked Data via an unrestricted endpoint will not scale. To address this challenge, a variety of methods will be tested to determine the sweet spot between data dumps, i.e., just storing huge RDF files for download, on one side, and unrestricted public (Geo)SPARQL endpoints on the other. Methods and combinations of methods will include (Geo)SPARQL-to-SQL rewriting, transparent Web Service proxies for WFS, Linked Data Fragments, query optimization, restricted queries via a user interface, and so forth. The sweet spot will be defined as the method (or combination of methods) that enables common usage scenarios for Linked National Map Data, i.e., one that retains as much as possible of the functionality that full Linked Data query access via a public endpoint would provide, while keeping server load and average query runtime (for common usage queries) at an acceptable level. A Web-based user interface will expose the resulting data and make them queryable and explorable via the follow-your-nose paradigm.
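
For orientation, the snippet below shows what querying an open Linked Data endpoint looks like from client code. The endpoint URL is hypothetical, and an unrestricted endpoint is only one of the access methods the project evaluates (alongside data dumps, Linked Data Fragments, and restricted query interfaces).

```python
# Minimal sketch of querying a (hypothetical) SPARQL endpoint for National Map features.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/national-map/sparql")  # hypothetical endpoint
sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?feature ?label
    WHERE { ?feature rdfs:label ?label }
    LIMIT 10
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["feature"]["value"], row["label"]["value"])
```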

2015-2016
Harvard-14-03 Development and application of ontologies for NHD and related TNM data layers

Feature layers in the US National Map (TNM) are fundamental contexts for spatiotemporal data collection and analysis, but they largely exist independently of each other as map layers. This project will explore the use of formal ontologies and semantic technology to represent functional relationships within and between "wet" hydrography and "dry" landscape layers, expressing the basis for the occurrence of lakes and rivers. It will then test these representations in applications for discovering and analyzing related water science data.

2014-2016
Harvard-14-02 Developing a place name extraction methodology for historic maps

We propose to develop an approach for automating the extraction and organization of place name information from georeferenced historic map series (in multiple languages), focusing on scales finer than 1:250,000. Such information is essential to the spatialization of unstructured text documents and social media. Phase I will be a feasibility study that evaluates the best existing technologies against current extraction costs (including outsourcing) and then recommends next steps for establishing a production system; the options will be: 1) do nothing, as there is currently no cost-effective approach; 2) adopt an existing platform and develop workflows for it; or 3) develop a new system that combines existing technologies and/or develops new technologies.

2014-2015
GMU-14-01 Improving geospatial data discovery, access, visualization and analysis for Data.gov, geospatial platform and other systems

Develop a set of efficient tools to better discover, access, and visualize the data and services from Data.gov and the Geospatial Platform to meet the following requirements: 1) support discovery using enhanced semantic context and inference to improve discovery recall and precision; 2) provide and enhance an open-source viewer capability to visualize and analyze different online map services; 3) develop an open-source analytical workbench prototype, for incorporation into Data.gov and the Geospatial Platform, that enables end-user computational analysis on multiple remote geospatial web services, captured as services for optional re-execution and producing analytical data products (data, graphs, maps) from raster and vector overlay; and 4) supply a QoS module to check and track service quality information.

2014-2015
GMU-14-05 Developing a Hadoop-based middleware for handling multi-dimensional NetCDF

Climate observations and model simulations are producing vast amounts of array-based spatiotemporal data. Efficient processing of these data is essential for addressing global challenges such as climate change, natural disasters, diseases, and other emergencies. However, this is challenging not only because of the large data volumes but also because of the intrinsically high dimensionality of climate data. To tackle this challenge, this project proposes a Hadoop-based middleware to efficiently manage and process big climate data in a highly scalable environment. With this approach, big climate data are stored directly in the Hadoop Distributed File System in their original format, without any special configuration of the Hadoop cluster. A spatiotemporal index is built to bridge the logical array-based data model and the physical data layout, enabling fast data retrieval with spatiotemporal queries. Based on the index, a data-partitioning algorithm is proposed to enable MapReduce to achieve high data locality and a balanced workload. The proposed approach is evaluated using the NASA MERRA reanalysis climate data. The experimental results show that the Hadoop-based middleware can significantly accelerate queries and processing (~10x speedup compared to the baseline test using the same cluster), while keeping the index-to-data ratio small (0.0328%). The applicability of the Hadoop-based middleware is demonstrated by a climate anomaly detection application deployed on the NASA Hadoop cluster.
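
The snippet below shows, outside of Hadoop, the logical array-based access pattern that the spatiotemporal index is meant to serve efficiently: pulling a time/latitude/longitude box out of a NetCDF variable. The file name, variable name, and index ranges are hypothetical; in the middleware, the equivalent subsetting is mapped onto HDFS blocks via the index and executed with MapReduce.

```python
# Sketch of a spatiotemporal subset query against an array-based NetCDF variable.
from netCDF4 import Dataset

with Dataset("merra_sample.nc") as ds:          # hypothetical MERRA reanalysis file
    temperature = ds.variables["T"]             # hypothetical variable name
    # Select time steps 0-23, a latitude band, and a longitude band in one slice.
    subset = temperature[0:24, 100:140, 200:260]
    print(subset.shape)
```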

2014-2015
GMU-14-07 Analyzing and visualizing data quality in crowdsourcing environments

Significant new influences in the geospatial domain include Web 2.0, social media, and user-centered technologies, as well as the generation and use of very large, dynamic datasets. These areas of influence present many new opportunities and challenges, and may require the development of a "new geospatial toolkit" to be used with new sources and types of geospatial data. A goal of our research is to help define this new set of tools, techniques, and strategies, and to explore approaches and practical perspectives for integrating new sources of data and knowledge into the geospatial domain. In this project, we developed web- and mobile-based data collection prototypes that provide methods for characterizing and assessing data quality in crowdsourcing systems, along with novel ways to visualize data quality metrics. The different data collection methods, hybrid databases, and quality assessment methods for crowdsourced data form the core of the prototype.

2014-2015
STC-14-01 Developing a big spatiotemporal data computing platform (continued by STC-15-01)

This research is to design and develop a general computing platform that best utilizes cluster, grid, cloud, many integrated core (MIC), graphics processing unit (GPU), GPU/CPU hybrid, and volunteer computing for accessing, processing, managing, analyzing, and visualizing big spatiotemporal data. The computing and optimization techniques and the heterogeneous resource integrator we have developed can support such a platform, which can facilitate a number of applications, e.g., climate simulation, social media analyses, online visual analytics, the geospatial platform, and the GEOSS Clearinghouse.

2014-2015
STC-14-00 Advancing Spatiotemporal Studies to Enable 21st Century Sciences and Applications

Many 21st-century challenges to contemporary society, such as natural disasters, unfold in both space and time and require that spatiotemporal principles and thinking be incorporated into the computing process. A systematic investigation of these principles would advance human knowledge by providing trailblazing methodologies for exploring the next generation of computing models to address the challenges. This overarching center project analyzes and collaborates with international leaders and the science advisory committee to generalize spatiotemporal thinking methodologies, produce efficient computing software and tools, elevate application impact, and advance human knowledge and intelligence. Objectives: a) generalize spatiotemporal thinking methodologies; b) produce efficient computing software and tools; c) elevate application impact; and d) advance human knowledge and intelligence.

2013-2020

* This page is under construction. More information will be added. *

RECENT NEWS

November 7 – 8, 2019
2019 November IAB Meeting at George Mason University

Jun 19 – Jun 20, 2019
2019 June IAB Meeting at Harvard University

STC Renewed for the Next Five Years

Oct 10 – Oct 11, 2018
2018 October IAB Meeting at Harvard University

Oct 11, 2018
Next Generation GIS Workshop

May 31 – June 1, 2018
2018 May IAB Meeting at George Mason University

August 7-9, 2017
2nd International Symposium on Spatiotemporal Computing (ISSC), at Harvard University

May 17 – May 18, 2016
IAB Meeting at George Mason University, Fairfax

March, 2016
STC now offers free hybrid cloud services