National Aeronautics and Space Administration

Goddard Space Flight Center

NASA Sensor Web Experiments

SensorWeb Components

For more information contact: Michael Flick
EO-1 Technology Transfer Manager
EO-1 Mission Office
NASA Goddard Space Flight Center
Greenbelt, MD 20771
Phone: 301-286-8146
Fax: 301-286-2840
E-Mail: Michael Flick

Over the past 11 years, our team has developed a set of SensorWeb software tools (tables below) that enable key functional capabilities. These tools are key building blocks that let users assemble a SensorWeb like a Lego kit, adding sensors that produce data and elements that produce data products. Note that JPL partnered with us and that some components were built at each center, and that many of the components are based on the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) standards.

Basic SensorWeb Architecture

GSFC SensorWeb Components (Ground)

JPL SensorWeb Components (Ground)

IPM SensorWeb Internal SW Components (Onboard)
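Many of the ground components listed above are built around the OGC SWE service interfaces. As a minimal, hedged sketch (the endpoint URL, offering, and observed property below are hypothetical placeholders, not the mission's actual identifiers), a client could pull observations from a Sensor Observation Service (SOS) using its standard key-value-pair binding:

```python
# Sketch only: query a hypothetical OGC SOS 2.0 endpoint for observations.
import requests

SOS_ENDPOINT = "https://example.org/sensorweb/sos"  # hypothetical service URL

params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "EO1_HYPERION_OBSERVATIONS",    # placeholder offering id
    "observedProperty": "surface_reflectance",  # placeholder property
    "temporalFilter": "om:phenomenonTime,2010-10-05T00:00:00Z/2010-10-06T00:00:00Z",
    "responseFormat": "http://www.opengis.net/om/2.0",
}

response = requests.get(SOS_ENDPOINT, params=params, timeout=60)
response.raise_for_status()
print(response.text[:500])  # beginning of the O&M XML observation document
```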

The first thing we wanted to do was to enable scientists and the public to access satellite products without requiring a software team to develop products or an operations team to task the satellite or satellites for the required images. Furthermore, we wanted the satellites to be able to trigger themselves on some occasions, either via onboard detections or via other sensors. We built a “do-it-yourself” toolbox for our satellite, Earth Observing-1 (EO-1). Anyone can point at a map at our website, task EO-1 for imagery, and then automatically process and receive selected data products from our cloud. As an example, Figure 1 shows an image of a toxic sludge leak in Hungary in 2010 that was triggered by an external user, who then enhanced the image; it was subsequently downloaded by thousands of media outlets worldwide without the knowledge of the EO-1 operations team.

Figure 1: The Ajkai Timföldgyár toxic sludge spill, western Hungary, October 2010.

On October 4, 2010, an accident occurred at the Ajkai Timföldgyár alumina (aluminum oxide) plant in western Hungary (see above). A corner wall of a waste-retaining pond broke, releasing a torrent of toxic red sludge down a local stream. Several nearby towns were inundated, including Kolontar and Devecser, where the sludge was 2 meters (6.5 feet) deep in places. Four people were killed immediately, likely from drowning, and several more were missing. Dozens of residents were hospitalized for chemical burns.
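To illustrate the “point at a map and task the satellite” workflow described above, here is a minimal sketch of how a map click might be packaged into a tasking request. The real interface is an OGC Sensor Planning Service; the JSON endpoint, field names, and product keyword below are hypothetical, and the coordinates are simply the Ajka area used as an example:

```python
# Illustrative only: package a point of interest into a tasking request.
import json
import requests

TASKING_ENDPOINT = "https://example.org/sensorweb/tasking"  # hypothetical URL

def request_scene(lat, lon, start, end, product="surface_reflectance"):
    """Ask the SensorWeb to task the satellite over a point of interest."""
    payload = {
        "target": {"lat": lat, "lon": lon},
        "window": {"start": start, "end": end},
        "product": product,            # hypothetical product keyword
        "notify": "user@example.org",  # where the finished product is sent
    }
    r = requests.post(TASKING_ENDPOINT, data=json.dumps(payload),
                      headers={"Content-Type": "application/json"}, timeout=60)
    r.raise_for_status()
    return r.json()

# Example: the area of the Ajkai sludge spill, western Hungary, October 2010.
print(request_scene(47.08, 17.50, "2010-10-05", "2010-10-14"))
```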

Our latest effort will begin later in 2015 under another award from ESTO AIST-14, in which we will integrate spectrometers and imaging spectrometers onto a mini-quadcopter unmanned aerial system (UAS) and fly this system over agricultural fields, forest land, and grasslands in Wisconsin, looking at chemistry and photosynthetic capacity, diversity, disease, GxE (genes-by-environment drivers of variation), and other items. The idea is to use onboard intelligent image-aided navigation to optimize our measurements and obtain science-grade data with UAS systems. This means that we will use high-performance onboard computing to decide where to fly the UAS to optimize our measurements, given that the UAS has about 30 minutes of flight time and thus is a limited resource with very high resolution. Figure 2 depicts our most recent objective: to fulfill a specific science need to inter-compare science data at various scales as close to simultaneously as possible, but with limited sensor resources.
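As a rough illustration of the kind of decision the onboard computing has to make, the sketch below greedily picks the next waypoint with the best science score per minute of travel until the flight-time budget is spent. The scoring values, speed, and the greedy heuristic itself are illustrative assumptions, not the project's actual planner:

```python
# Greedy route planning under a flight-time budget (illustrative sketch).
from math import hypot

def plan_route(candidates, start, budget_min, speed_m_per_min):
    """Repeatedly visit the waypoint with the best score per minute of
    travel that still fits within the remaining flight-time budget."""
    pos, remaining, route = start, budget_min, []
    todo = list(candidates)  # each candidate: (x_m, y_m, science_score)
    while todo:
        def travel_min(wp):
            return hypot(wp[0] - pos[0], wp[1] - pos[1]) / speed_m_per_min
        feasible = [wp for wp in todo if travel_min(wp) <= remaining]
        if not feasible:
            break
        best = max(feasible, key=lambda wp: wp[2] / max(travel_min(wp), 1e-6))
        remaining -= travel_min(best)
        pos = (best[0], best[1])
        route.append(best)
        todo.remove(best)
    return route, remaining

# Hypothetical field plots: (metres east, metres north, relative score).
plots = [(200, 50, 5.0), (600, 400, 9.0), (150, 700, 3.0), (900, 100, 7.5)]
route, left = plan_route(plots, start=(0, 0), budget_min=30, speed_m_per_min=300)
print(route, f"{left:.1f} min of flight time remaining")
```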

Figure 2: Synoptic and hyperspectral observation scales.

The above diagrams, developed by Rob Sohlberg at the University of Maryland College Park, Department of Geography, for the AIST-14 UAS research activity, illustrate the various needs to obtain data at different scales. The complexity this creates is partially offset by a high-performance Intelligent Payload Module (IPM), which enables intelligent gathering of data/images using an optimization strategy developed by a scientist, in conjunction with performing data processing onboard, which previously was not possible.

The concepts developed under this research effort and other past efforts will translate seamlessly to other areas that we currently monitor, such as volcanoes, floods, and landslides. Better observations by SensorWebs will enable more accurate modeling and predictions for phenomena such as floods (with which we are currently experimenting), droughts, fires, food security, and waterborne disease.

Q. What are the major problems or challenges for SensorWeb?

Even with the great strides forward in onboard processing and higher-performance communication links, science and application requirements for coverage, spectral resolution, and spatial resolution will easily outstrip available capability for the foreseeable future. For example, upcoming NASA Decadal missions that survey the Earth’s surface, such as the HyspIRI mission, which will survey land surfaces with an imaging spectrometer, will generate between 0.6 and 1.2 Gbps of data. Furthermore, depending on which operations concept is chosen, terabits of data are generated every orbit or two. So the sheer data volume of future sensors creates issues onboard similar to those of “Big Data” on the ground. This is paired with the issue of limited power to drive the onboard processors. To date, the key solution being implemented is the use of Field Programmable Gate Arrays (FPGAs), which provide one to two orders of magnitude better onboard performance using only a few watts of power compared to existing onboard CPUs, albeit with less programming flexibility. Thus one near-term challenge is to provide flexible software Application Programming Interfaces (APIs) that enable users to synthesize FPGA circuits as though they were software programs. For the longer term, nanotechnology such as graphene shows promise for multifunctional nanosatellite components that use an order of magnitude less power, with significant performance increases for processing, sensing, and navigation, and that can also serve as super-strong structural material, enabling capable nanosat systems that can integrate into the overall SensorWeb architectural concept.
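A quick back-of-the-envelope check of those volumes, assuming a roughly 95-minute low Earth orbit and a 100% imaging duty cycle (both are assumptions for illustration, not HyspIRI operations-concept numbers):

```python
# Data volume per orbit implied by the quoted instrument rates.
orbit_minutes = 95  # assumed LEO orbital period
for rate_gbps in (0.6, 1.2):
    terabits_per_orbit = rate_gbps * orbit_minutes * 60 / 1000
    print(f"{rate_gbps} Gbps -> {terabits_per_orbit:.1f} Tb per orbit")
# 0.6 Gbps -> ~3.4 Tb, 1.2 Gbps -> ~6.8 Tb: terabits per orbit, as stated.
```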

Q. What is in the future for the SensorWeb program?

Figure 2 depicts the ultimate SensorWeb goal, which is to obtain science-grade data at all scales and with total coverage of the Earth. Obviously this is not practical, so the next best thing is intelligent gathering of key data that fills in observation holes. This means being able to make decisions based on processing relatively high volumes of data onboard, but with low power. Presently, we are experimenting with multicore parallel processing augmented with FPGA circuits to process selected images/data in seconds rather than minutes or hours, at less than 20 watts onboard. But even 20 watts for processing makes nanosats not viable for high-data-rate sensors when one considers the power that small solar arrays can deliver. For our present AIST-11 effort, in which we are developing Intelligent Payload Module (IPM) onboard processing metrics, we are trying to determine, for selected observation scenarios, how much data we can process onboard given satellite and UAS power limitations. Nanotechnology, such as the two-dimensional material graphene, might enable transistors and ultimately processors that perform at 300 gigahertz but at very low power and weight relative to today’s technology. In addition, the technology points to the possibility of super-efficient batteries and solar arrays. Finally, nanotechnology points to the possibility of smaller, more efficient microelectromechanical systems (MEMS) inertial measurement units (IMUs). The vision of “smoke detectors in the sky” becomes viable with superlight materials that require only minimal power.
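To see why even 20 watts of processing is hard to host on a nanosat, consider the duty-cycle arithmetic below. All of the power figures except the 20-watt processing draw from the text are hypothetical, chosen only to illustrate the calculation:

```python
# Hypothetical nanosat power budget vs. a 20 W onboard processing load.
processor_watts = 20.0  # processing draw while active (from the text)
array_watts = 7.0       # assumed orbit-average power from a small solar array
bus_watts = 4.0         # assumed power reserved for bus, comms, and ADCS

available = max(array_watts - bus_watts, 0.0)
duty_cycle = available / processor_watts
print(f"Processing can run about {duty_cycle:.0%} of the time "
      f"({available:.1f} W available vs {processor_watts:.0f} W demand)")
# -> roughly 15%: continuous high-data-rate processing does not fit.
```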

Q. What needs to happen to ensure the next generation of SensorWeb technology?

The next generation of SensorWeb technology is driven by a few key factors. The first is the need to reduce the cost of acquiring science and applications data. A second driver is coverage requirements, including spatial, spectral, and temporal coverage, which mostly translate into increasing data volumes and data rates. A third driver is the corresponding progress in technology functionality and performance. I think that the SensorWeb domain will follow the Field of Dreams paradigm of “build it and they will come,” subject to the constraint that a certain threshold of functionality is met. For example, there is a threshold that would enable the building of a nanosat given its power, mass, and volume constraints; reducing the power of an algorithm running on an onboard computer by 50% may not be enough to fit it into a nanosat form factor.

Q. What role can social media play in SensorWeb?

One of the paradigm shifts that we investigated was the use of social media to enable interoperability within a SensorWeb. Our team helped to develop some international standards for use with SensorWebs in collaboration with the Open Geospatial Consortium (OGC), which has hundreds of large international partners. That work was based on developing standard SOAP/WSDL APIs so that the internal data processing was hidden from an external user and the interface to any participating sensor was abstracted and accessible on the Internet. But we discovered that for the roughly 80% of users who are members of the general public or are not in a computer-related field, it was still too difficult to access satellite sensor data and products. Thus, one of our team members, Pat Cappelaere of Vightel, developed the concept of the GeoSocial API, which allows users to find, request, and share satellite data and products via social media and other nearly free tools on the Internet. Products can be dropped onto maps and reaggregated into new products without much technical knowledge. This leverages the sharing functionality built by others around the world to share sensor data in a much easier way, enabling users to easily share key data with a community of similar interest, for example flood disaster responders, using Facebook, Twitter, and GitHub.
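The flavor of the GeoSocial idea is that a product becomes a simple, shareable web object rather than something only a ground-data-processing team can handle. As a hedged sketch (the feed URL, property names, and file links are placeholders, not the actual GeoSocial API), a flood product might be published as a GeoJSON Feature that any map client or social feed can consume:

```python
# Sketch: publish a data product as a GeoJSON Feature to a hypothetical feed.
import json
import requests

flood_product = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [17.50, 47.08]},  # lon, lat
    "properties": {
        "title": "EO-1 ALI surface-water extent",  # illustrative product name
        "acquired": "2010-10-09",
        "browse": "https://example.org/products/ajka_20101009.png",
        "data": "https://example.org/products/ajka_20101009.tif",
    },
}

r = requests.post("https://example.org/geosocial/feed",  # hypothetical URL
                  data=json.dumps(flood_product),
                  headers={"Content-Type": "application/geo+json"}, timeout=60)
r.raise_for_status()
```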

Q. How can NASA Goddard use its growing expertise with cloud-based services in relation to SensorWeb?

We’re looking toward the use of crowd-sourcing capabilities, so that as our data from satellites is uploaded to the cloud, products are automatically produced and shared with a community of common interest. Furthermore, the community can augment the data from the ground using their phones. For example, a satellite may map a flood from space with coarse resolution; people on the ground can provide additional observations using the GPS in their phones to validate water locations and thus improve the inundation map in a shared environment in the cloud. Vectors and points can be color coded to identify the source. This is a good way to perform calibration and validation of satellite data, to see how well it actually detects a certain phenomenon, or to let an algorithm builder adjust the data processing algorithm via calibration against ground observations.

One can imagine that if a cloud is collecting the various sensor data depicted in Figure 2, in a multiscale measurement SensorWeb, and storing it in an online, GeoServer-fronted, free cloud database such as PostgreSQL, the resulting resource for crowd sourcing becomes a very powerful tool. We want to improve modeling via this approach and make it cost effective. This becomes a composite information system: imagine a SensorWeb that’s small and yet combines people, constellations of nanosats, UASs hovering near the ground, devices (iPods, tablets, etc.), and small sensors enabled by APIs that can deliver data through social media.

Furthermore, one function that we have operational in our cloud is called a Web Coverage Processing Service (WCPS), which allows users to invent algorithms to process the sensor data either on the ground or onboard without having to program anything. Taking that one step further, we are developing the concept of using the cloud to upload key map features to perform selected algorithms. For example, we have the Landsat Global Land Survey 2000 stored in our cloud. We are investigating the idea of loading a UAS, airplane, or satellite in real time with a portion of the Landsat GLS for the purpose of co-registering/positioning imaging spectrometer data on a map. This is a different method of performing ortho-rectification, but it involves communication between a flying sensor and a ground cloud.
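For a sense of what the WCPS mentioned above looks like to a user, the sketch below submits a short query that computes a simple vegetation-index style band ratio on the server. The endpoint, coverage name, band names, and the exact submission mechanics are assumptions for illustration; they vary by server:

```python
# Sketch: submit a WCPS query so the server computes the product for us.
import requests

WCPS_ENDPOINT = "https://example.org/sensorweb/wcps"  # hypothetical endpoint

query = (
    "for c in (EO1_HYPERION_SCENE) "  # placeholder coverage and band names
    'return encode(((c.nir - c.red) / (c.nir + c.red)), "image/tiff")'
)

r = requests.post(WCPS_ENDPOINT, data={"query": query}, timeout=120)
r.raise_for_status()
with open("ndvi.tif", "wb") as f:  # save the server-computed product
    f.write(r.content)
```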

Q. Speaking of trends, what do you see making the biggest impact on SensorWeb?

Autonomy is one key function that I have not mentioned yet. We have autonomy software running on EO-1 that enables it to make independent decisions about which images it will take. This software was developed by JPL under the NASA New Millennium Program, with Steve Chien as the lead. Furthermore, members of our team have also done experiments with a distributed architecture of intelligent-agent software modules. Thus a key step is to integrate autonomy software into a SensorWeb, following what has already been done. The next milestone, where we hope to make some strides, is the UAS experiments that start this year, enabling the UAS to make its own decisions as to where it makes measurements based on what it sees with its onboard processing. Of course, in the satellite domain, this type of functionality has already been demonstrated using the EO-1 satellite.

Q. We’ve covered the past, present, and future of SensorWeb. Is there anything left to tell?

There are so many ways to tell this story, and it’s very much still being written. Space-based FPGAs are moving us in the right direction. Nanotechnology points to the possibility of superlight, high-performance nanosats. AI points us toward distributed intelligence to dramatically automate networks of sensors and improve our ability to discover fleeting scientific phenomena. I like to use a basketball metaphor: if you’re a really good player, you can close your eyes and still make a shot. Your internal senses and intelligence fill in the holes where the senses are lacking; your brain acts like a holographic image, using data fusion so well integrated that the player can still make the shot. SensorWeb evolution can act the same way, filling holes in models and enabling better decision support despite the lack of total spectral, spatial, and temporal coverage.