How AI is revolutionizing satellite imagery for a better view of our planet


Posted on February 6, 2025

As of day 33 of 2025 (early February), the satellite tracking website “Orbiting Now” listed 11,559 active satellites in various Earth orbits, with missions spanning communications, Earth observation, technology development, navigation, and space science. These satellites provide us with an unprecedented view of Earth, enabling real-time monitoring and granular data collection. From tracking deforestation in the Amazon to monitoring agricultural yields across continents, these ‘eyes in the sky’ generate a massive volume of data that offers valuable information about our planet’s health and the human activities that have altered the Earth’s natural surface.

However, the sheer volume and complexity of this data lead to an exciting question: how can we harness artificial intelligence (AI) to make sense of this vast array of satellite imagery? Let’s explore the intersection of satellite imagery and AI by understanding the unique characteristics of satellite data, the challenges in processing it, and the potential solutions offered by deep learning.

Example of satellite imagery analysis showing how raw satellite data for Sri Lanka is processed and transformed into actionable insights through deep learning techniques. The definitions are derived by a team from New York University jointly with UN-Habitat (https://unhabitat.org/sites/default/files/2020/06/city_definition_what_is_a_city.pdf)

The unique nature of satellite imagery

Satellite images are distinct from photographs taken with a conventional camera in several ways. First, they typically contain far more pixels per image: while a standard photograph may have millions of pixels, a satellite image often contains many more in order to capture fine detail over large areas. Second, where a typical photograph has only three channels for red, green, and blue (RGB), satellite images commonly carry 8 to 12 spectral bands that capture data beyond the visible spectrum, including infrared, ultraviolet, thermal, and other bands. This enables the calculation, at each pixel, of extremely useful ground information such as vegetation health, water quality, atmospheric conditions, and other factors invisible to the naked eye.
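
To make this concrete, the sketch below computes the Normalized Difference Vegetation Index (NDVI), a standard per-pixel proxy for vegetation health, from the red and near-infrared bands of a multispectral scene. The array shapes and band positions here are assumptions for illustration; actual band ordering varies by sensor.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Values near +1 indicate dense, healthy vegetation; values near 0
    indicate bare soil; negative values typically indicate water.
    """
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    denom = nir + red
    # Avoid division by zero on pixels where both bands are zero.
    return np.where(denom == 0, 0.0, (nir - red) / denom)

# Hypothetical 10-band scene of shape (bands, height, width); the
# red/NIR positions depend on the sensor (Sentinel-2, for example,
# places red in band 4 and NIR in band 8).
scene = np.random.rand(10, 256, 256).astype(np.float32)
vegetation_health = ndvi(nir=scene[7], red=scene[3])
```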

Moreover, satellite images are geo-referenced: each pixel has associated coordinates that pinpoint its exact location on Earth. The resolution of these images refers to the Earth’s surface area covered by each pixel; for instance, in an image with 30-meter ground resolution, each pixel represents a 30×30-meter area of the Earth’s surface. Additionally, the metadata of these images provides essential information such as sun angle, atmospheric conditions, and satellite position.
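
In code, this geo-referencing is typically accessed through a library such as rasterio. The sketch below is a minimal example, with “scene.tif” standing in for any geo-referenced GeoTIFF.

```python
import rasterio

# "scene.tif" is a placeholder for any geo-referenced GeoTIFF.
with rasterio.open("scene.tif") as src:
    print(src.crs)   # coordinate reference system, e.g. EPSG:32644
    print(src.res)   # ground resolution per pixel, e.g. (30.0, 30.0)

    # Map a pixel (row, col) to the coordinates of its centre on Earth.
    x, y = src.xy(100, 200)
    print(f"pixel (100, 200) -> ({x:.2f}, {y:.2f}) in {src.crs}")

    # The inverse: which pixel covers a given ground coordinate?
    row, col = src.index(x, y)
```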

Challenges in processing satellite imagery

Processing these complex images is extremely challenging for traditional analysis methods. Atmospheric conditions such as clouds, haze, and smog can significantly degrade data quality. The multi-spectral nature of each pixel requires algorithms that can effectively handle many dimensions. At the same time, images of the same location can show significant variation in band values across the year due to seasonal, lighting, and atmospheric changes; these temporal shifts must be accounted for and normalized for consistent analysis, as the sketch below illustrates.
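
The sketch below shows one simple normalization strategy under simplified assumptions: z-scoring each spectral band so that absolute shifts in brightness between seasons matter less to downstream models. Real pipelines also apply cloud masking and atmospheric correction, which are only hinted at here via an optional validity mask.

```python
import numpy as np

def standardize_bands(scene, mask=None):
    """Z-score each band of a (bands, H, W) scene.

    `mask` is an optional boolean (H, W) array marking valid
    (e.g., cloud-free) pixels; statistics use valid pixels only.
    """
    out = np.empty_like(scene, dtype=np.float32)
    for b in range(scene.shape[0]):
        band = scene[b].astype(np.float32)
        valid = band[mask] if mask is not None else band
        mean, std = valid.mean(), valid.std() + 1e-8
        out[b] = (band - mean) / std
    return out

# Two acquisitions of the same area at different times of year
# (hypothetical reflectance values; a darker, hazier winter scene).
summer = np.random.rand(10, 256, 256) * 0.8
winter = np.random.rand(10, 256, 256) * 0.3
summer_n, winter_n = standardize_bands(summer), standardize_bands(winter)
```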

Furthermore, geometric distortions present another challenge: images are warped by the Earth’s curvature, satellite motion, and terrain elevation, and must be corrected through geometric transformation techniques.
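
Full orthorectification requires a sensor model and elevation data, which are beyond a short example, but the related geometric step of reprojecting a scene into a target coordinate system can be sketched with rasterio as follows; the file paths and target CRS are placeholders.

```python
import rasterio
from rasterio.warp import calculate_default_transform, reproject, Resampling

# Placeholder paths and CRS; full orthorectification additionally
# requires a sensor model (e.g., RPCs) and a digital elevation model.
src_path, dst_path, dst_crs = "scene.tif", "scene_utm.tif", "EPSG:32644"

with rasterio.open(src_path) as src:
    transform, width, height = calculate_default_transform(
        src.crs, dst_crs, src.width, src.height, *src.bounds)
    profile = src.profile.copy()
    profile.update(crs=dst_crs, transform=transform,
                   width=width, height=height)

    with rasterio.open(dst_path, "w", **profile) as dst:
        for band in range(1, src.count + 1):
            reproject(
                source=rasterio.band(src, band),
                destination=rasterio.band(dst, band),
                src_transform=src.transform,
                src_crs=src.crs,
                dst_transform=transform,
                dst_crs=dst_crs,
                resampling=Resampling.bilinear,  # smooth interpolation
            )
```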

The deep learning revolution

Classical computer vision methods, which mostly rely on hard-coded features and rule-based algorithms, face limitations in feature matching due to spectral distortions between images, long baselines, and wide intersection angles. In recent years, deep learning has emerged as a game-changer in satellite imagery analysis. Convolutional Neural Networks (CNNs), a regularized type of feed-forward deep learning network that learns features by itself through filter (kernel) optimization, have shown impressive results and consistent progress over handcrafted methods in applications such as building digital surface models, object detection, land cover classification, and semantic segmentation. Learning-based models have also proven effective on benchmarks addressing correspondence problems between images with significant differences in scale, illumination, and colorimetry. Their ability to automatically learn hierarchical features from images, eliminating the need for manual feature engineering, makes them well suited to complex visual data like satellite imagery.
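
As a concrete, minimal sketch of the kind of model involved, the PyTorch snippet below defines a tiny CNN for patch-level land cover classification. The band count, patch size, and number of classes are assumptions; production models are typically much deeper, and segmentation tasks use architectures such as U-Net variants.

```python
import torch
import torch.nn as nn

class LandCoverCNN(nn.Module):
    """Tiny CNN mapping a multispectral patch to a land cover class."""

    def __init__(self, in_bands: int = 10, n_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            # Learned filters replace handcrafted spectral/texture features.
            nn.Conv2d(in_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> size-agnostic
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Hypothetical batch of 8 patches, 10 bands, 64x64 pixels each.
model = LandCoverCNN(in_bands=10, n_classes=6)
logits = model(torch.randn(8, 10, 64, 64))  # shape: (8, 6)
```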

For instance, the application of deep learning in agriculture has demonstrated significant potential in soil health and crop yield prediction. Soil health has a direct impact on all surface vegetation and can be broken down into soil moisture, soil nutrients, and soil salinity. Estimating ground-level soil moisture is useful for understanding hydrologic processes, vegetation states, and climatic conditions. Active microwave imaging from satellites is often used to estimate and predict soil moisture, because the dielectric property of soil changes with moisture content, which makes areas of high moisture stand out clearly at microwave wavelengths. Past studies have found that CNNs applied to satellite imagery of relatively fine resolution (10 m) demonstrate superior performance and accuracy over other machine learning approaches; a sketch of such a model appears after this paragraph. The correct balance of soil nutrients is likewise essential for healthy vegetation, but it is harder to estimate with remote sensing: in the presence of vegetation, nutrient levels can be correlated with remote sensing data only when a deficiency causes a visible change, for instance when a lack of nitrogen leads to poor plant growth.
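
The snippet below sketches the general shape of such a model: a small CNN regressor mapping SAR backscatter patches (e.g., two polarization channels) to a soil moisture value. The data is synthetic and the architecture is a placeholder for illustration, not the model used in the studies referenced above.

```python
import torch
import torch.nn as nn

class SoilMoistureCNN(nn.Module):
    """Minimal CNN regressor: SAR backscatter patch -> soil moisture."""

    def __init__(self, in_bands: int = 2):  # e.g., VV and VH polarizations
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_bands, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),  # predicted volumetric soil moisture
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

# Synthetic stand-ins for 10 m SAR patches and in-situ moisture labels.
patches = torch.randn(16, 2, 32, 32)
labels = torch.rand(16)
model = SoilMoistureCNN()
loss = nn.functional.mse_loss(model(patches), labels)
loss.backward()  # gradients for one training step
```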

The road ahead

As satellite technology continues to advance, the volume and availability of data will only grow. Integrating AI, particularly deep learning, into satellite imagery analysis is not just a trend but a necessity. Future developments will likely focus on improving model efficiency, reducing the need for labeled data through unsupervised learning, and enhancing real-time processing capabilities.

References

Albanwan, H., & Qin, R. (2022). A comparative study on deep-learning methods for dense image matching of multi-angle and multi-date remote sensing stereo-images. The Photogrammetric Record, 37(180), 385–409. https://doi.org/10.1111/phor.12430

Bosch, M., Foster, K., Christie, G., Wang, S., Hager, G. D., & Brown, M. (2019). Semantic stereo for incidental satellite images. 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), 1524–1532. https://doi.org/10.1109/WACV.2019.00167

Song, S., et al. (2024). Deep learning meets satellite images: An evaluation on handcrafted and learning-based features for multi-date satellite stereo images. arXiv preprint arXiv:2409.02825.

Neupane, B., Horanont, T., & Aryal, J. (2021). Deep learning-based semantic segmentation of urban features in satellite images: A review and meta-analysis. Remote Sensing, 13(4), 808.

Basu, S., et al. (2015). DeepSat: A learning framework for satellite imagery. Proceedings of the 23rd SIGSPATIAL International Conference on Advances in Geographic Information Systems.

Authored by Chanuka Algama, Researcher – Data, Algorithms and Policy Team, LIRNEasia
