Twitter’s Anniversary Notification has reminded me that I put together a small satellite imagery management project, SinpoSatBot, six years ago to win an argument on Twitter. It seems like a good time to do a postmortem on whether it was successful (it was!), whether it was useful (it was not!), and whether I won the argument (I did! but I didn’t actually follow up with whomever I was arguing with).
If this ends up being a useful article, I can follow up with deeper dives and tutorials into exactly how to generate these numbers using the tools and datasets cited below.
SinpoSatBot
The SinpoSatBot was born out of a very straightforward question:
Can the DPRK see when remote sensing satellites are overhead, and time/hide/show-off their missile operations on their own terms?
For some readers, the answer is a strong and obvious “yes,” but let’s show our work. At the time, a few folks were arguing that the DPRK’s lack of sophisticated space-based and ground-based radar systems meant that it was totally at the mercy of U.S. remote sensing assets. This pushed me to develop a small proof of concept to show that I, Scott LaFoy, who also lacks sophisticated space-based and ground-based radar systems, could reliably predict the location and viewing areas of remote sensing assets.
Now technically what I proved is that with free software and a little bit of 2017-era home computing hardware, I could reliably predict commercial optical imaging coverage. My assumption would be that a dedicated nation-state actor could do a wee bit more.
The Bot Itself
SinpoSatBot was a multi-tool workflow that used publicly available satellite and sensor data plus STK (Systems Tool Kit, a simulation platform from AGI, now part of Ansys) to detect when the port of Sinpo was visible to commercial optical imaging satellites. At the time, the open-source community was excited about the DPRK’s new submarine-launched ballistic missile program, and the movements of the SLBM TULP and the new experimental Gorae ballistic missile submarine were closely examined, so Sinpo was a natural test case. In 2015/2016, there were several SLBM launches, but very minimal commercial satellite imagery showing preparations. Were there so few images that the missile operations weren’t being picked up? Or were the North Koreans exploiting predictable gaps in satellite coverage over Sinpo port?
The bot was conceptually simple. Tweet every time that Sinpo port was visible to commercial satellites. To do that, we need to know:
- What satellites have optical sensors on them (which we’ll be pulling from open source datasets and general industry knowledge)
- What the orbits of those satellites are (which we’ll be pulling from open-source TLE datasets and tool-provided datasets)
- What the capabilities of those sensors are (which we will be pulling from public-facing industry sources, a little bit of math, and some generous trial-and-error)
- What time it is now (which will be handled by the computer)
One of these is easy, two of these are pretty straightforward dataset assembly problems, and one is hard.
Step One: What satellites have optical sensors on them?
Since this was a proof of concept, this dataset was not exhaustive. In 2017, it covered a solid percentage of the market, but in 2023, quite a bit of work would need to be done to account for all the new birds that have gone up and the old birds that have dropped out.
I wanted to include traditional satellite constellations, such as SPOT, WORLDVIEW, Pleiades, and GeoEye, plus newer large constellations (Planet’s Dove flocks, as well as SKYSAT, which Planet bought from Google during the course of this project). This was a healthy mix of legacy systems, low-revisit/high-resolution systems, and (at the time) emerging lower-resolution/high-revisit systems. Planet was really the driving force here, because their high revisit rate posed a significantly higher challenge to any obfuscation attempts by the DPRK. It is hard to truly hide an entire ballistic missile test operation from hundreds of eyes in the sky.
The satellite list was built with publicly available datasets like the Union of Concerned Scientists Satellite Database, N2YO, OSCAR, and CELESTRAK, as well as the relevant corporate webpages of each constellation. Jonathan’s Space Report and Gunter’s Space Page were constant references throughout the whole process.
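As a rough illustration of that assembly step, here is a minimal sketch of how one might filter something like the UCS Satellite Database down to optical imagers. The file name and column headers here are assumptions; the real database ships as a spreadsheet whose headers shift between releases.

```python
import csv

# Hypothetical tab-separated export of the UCS Satellite Database;
# the actual file name and column headers vary between releases.
UCS_EXPORT = "ucs_satellite_database.tsv"

def optical_imagers(path):
    """Yield (name, NORAD ID) for satellites whose stated purpose
    looks like optical Earth observation."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            purpose = row.get("Purpose", "")
            detail = row.get("Detailed Purpose", "")
            if "Earth Observation" in purpose and "Optical" in detail:
                yield row["Name of Satellite"], row["NORAD Number"]

if __name__ == "__main__":
    for name, norad in optical_imagers(UCS_EXPORT):
        print(name, norad)
```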
We have our list of satellites to plug into STK.
Step Two: Simulating the orbits.
The orbits for the satellites were generated through a combination of CELESTRAK and, mostly, STK’s own catalogue of satellite TLEs.
STK has an incredibly robust set of capabilities for open-source intelligence users. It is pretty GPU and processor heavy, but once you either have TLEs from a third-party assessor or from STK’s pre-built catalogue, the job basically switches from human analyst to computer propagation.
A note here: TLEs from third-party watchers would be the only way to track nation-state assets that are not necessarily provided in other datasets. I’m not going to cover that methodology here, but Jeffrey has talked about it pretty often on the Arms Control Wonk Podcast.
We can plug the TLEs or the satellites themselves into STK, and it will propagate out the orbits. This means we now can simulate satellite positions at any given time.
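If you don’t have an STK license, the same propagation step can be sketched with the open-source Skyfield library, which wraps the standard SGP4 propagator. A minimal sketch, assuming you have already saved TLEs to a local file:

```python
from skyfield.api import load, wgs84

ts = load.timescale()

# Element sets previously downloaded from CelesTrak or similar; the
# file name is an assumption, any standard TLE file works.
satellites = load.tle_file("commercial_imagers.tle")
print(f"Loaded {len(satellites)} element sets")

# Propagate one bird to the current instant and print its sub-satellite point.
sat = satellites[0]
geocentric = sat.at(ts.now())
lat, lon = wgs84.latlon_of(geocentric)
print(sat.name, round(lat.degrees, 3), round(lon.degrees, 3))
```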
A Major Caveat: Sic Semper Tyrannis
Satellites maneuver. The Tyranny of Orbit means that an object on a purely ballistic trajectory will have a well-defined, predictable orbit, but many satellites have at least a small amount of maneuverability built in for conjunction/collision avoidance. National security objects may have a large amount of maneuverability built in.
As such, TLEs need to be refreshed periodically. For SinpoSatBot, I refreshed TLEs every month. If the secrecy of my ballistic missile operations was on the line, I’d be refreshing far more often. The point is, though, that it is doable for commercial satellites (where maneuvers must be reported). For national security payloads, it would take careful watching and monitoring of the skies to derive independent, reliable TLEs.
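For the commercial case, the refresh itself is trivial to automate. A minimal sketch against CelesTrak’s GP data interface; the group name is an assumption, and a real bot would filter down to the imaging birds it cares about:

```python
import urllib.request

# CelesTrak's GP interface; "active" is one of its published groups.
URL = ("https://celestrak.org/NORAD/elements/gp.php"
       "?GROUP=active&FORMAT=tle")

def refresh_tles(path="commercial_imagers.tle"):
    """Overwrite the local TLE file with freshly downloaded element sets."""
    with urllib.request.urlopen(URL) as resp:
        with open(path, "wb") as f:
            f.write(resp.read())

refresh_tles()  # run monthly via cron, or far more often if it matters
```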
Step Three: Sensor, Automation, and Processing.
The bot next needed to understand what the satellites can see. This one is hard, and in the absence of precise information it can be replaced by risk-mitigation measures, if one is trying to dodge the satellite’s view.
Having their orbits simulated is vital: if we do not know where the satellite is at a given time, we can never establish what it could reasonably see. But having the sensors simulated is nearly as vital: if the North Koreans do not know what area a satellite can see, they would have to either take risks or assume very large viewing areas to protect their notional ballistic missile operation.
This step requires deriving two pieces of data: Field of View (FOV) and Field of Regard (FOR). For the purposes of this analysis, FOV is just what a sensor can see, and FOR is the area in which a sensor can be pointed and still return an accurate image. FOV comes from your eyeballs, FOR comes from your neck.
If we pretend we are the DPRK, trying to hide a ballistic missile operation, being in the FOV means that you’ve been seen. Being in the FOR means you could be seen. But since the DPRK may not be able to track the precise angle at which a satellite is pointing, the FOR can be just as risky for exposure as the FOV.
Calculating FOV and FOR isn’t particularly easy, and for this experiment we’re going to be relying on the numbers that industry is advertising. Company websites will frequently list what sensor packages their satellites utilize, or the practical FOV and FOR a customer could expect. After all, these companies want business, and want to tell customers what they can reasonably expect to purchase.
In some cases, FOV and FOR are given somewhat conservatively: you do not want to give commercial clients “bad imagery” that requires extensive, difficult processing or expertise to interpret; you want to give them something scalable and good. Companies for the most part can’t sell blurry shots taken at extreme angles, even though the satellite is physically capable of pointing in any direction. So, a caveat for this section: the true FOV and FOR may sometimes be larger than what the ultimate satellite operator presents. As such, I also referenced the sensor package manufacturer websites, like the Satellite Imaging Corporation, to build out some more aggressive FOV/FOR simulations.
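To give a flavor of the “little bit of math” involved: for a nadir-pointing sensor, an advertised swath width and an orbital altitude imply an effective full FOV angle, and an advertised maximum off-nadir angle implies a FOR radius on the ground. A back-of-envelope sketch that ignores Earth curvature (fine for narrow commercial sensors; the numbers are illustrative, not any specific satellite’s):

```python
import math

def fov_from_swath(swath_km, altitude_km):
    """Full field-of-view angle (degrees) implied by a nadir swath width,
    flat-Earth approximation."""
    return 2 * math.degrees(math.atan((swath_km / 2) / altitude_km))

def for_radius(altitude_km, max_off_nadir_deg):
    """Approximate ground radius (km) of the field of regard for a given
    maximum off-nadir pointing angle, same flat-Earth caveat."""
    return altitude_km * math.tan(math.radians(max_off_nadir_deg))

# Illustrative only: a ~13 km swath from ~500 km up is a ~1.5 degree FOV,
# while a 30 degree off-nadir limit sweeps a ~290 km radius on the ground.
print(round(fov_from_swath(13, 500), 2))
print(round(for_radius(500, 30), 1))
```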
Next, I built them out in STK and included some notes from their respective sources.
To ensure that I was in the right neighborhood, I picked a few satellites that I had images from, and literally just lined up the images I had with my simulation. Literal trial and error: “does this piece of imagery that I have fit into the FOV/FOR calculations that my simulation is using?”
They did, and I unfortunately did not think to screenshot the successes at the time.
But now we have a satellite with its sensor FOV (the little red box) and its FOR (the larger red cone).
The next step was scaling sensors across a constellation. STK (or at least the copy I had at the time; maybe it has changed since then, or maybe I was using it wrong, who knows) requires that each sensor be attached individually. And each satellite needed two STK objects, one for FOV and one for FOR. Lacking the patience to manually build over 100 sensor objects, I wrote a small Python script to generate FOV/FOR sensor objects based on the template of an initial sensor and iterate that across an entire identified constellation. Nothing fancy, just saving some time.
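The script was nothing more than template expansion. A schematic reconstruction of the idea; the satellite names are hypothetical and the command strings are placeholders rather than verified STK syntax:

```python
# Expand one hand-built sensor definition across a whole constellation.
CONSTELLATION = ["Dove_0C01", "Dove_0C02", "Dove_0C03"]  # hypothetical names

FOV_TEMPLATE = "New / */Satellite/{sat}/Sensor {sat}_FOV"  # placeholder syntax
FOR_TEMPLATE = "New / */Satellite/{sat}/Sensor {sat}_FOR"  # placeholder syntax

def build_commands(satellites):
    """Emit one FOV and one FOR sensor-creation command per satellite."""
    for sat in satellites:
        yield FOV_TEMPLATE.format(sat=sat)
        yield FOR_TEMPLATE.format(sat=sat)

with open("sensor_commands.txt", "w") as f:
    for cmd in build_commands(CONSTELLATION):
        f.write(cmd + "\n")
```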
Now we’re cooking. Each satellite now had a reasonable simulation of what it could see at any given time.
Now we have a simulation of optical sensing satellites, their sensors, and their target: Sinpo.
We use STK’s Access and Analysis Workbench functions to generate the “access times,” which just means when Sinpo’s position on the globe intersects with a sensor’s FOV/FOR.
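Outside of STK, a rough stand-in for the access computation is to find when each satellite rises above the elevation that corresponds to its maximum off-nadir angle, as seen from Sinpo. A sketch with Skyfield; Sinpo’s coordinates are approximate, and the 60-degree elevation floor is an assumed stand-in for a FOR constraint worked out offline:

```python
from skyfield.api import load, wgs84

ts = load.timescale()
satellites = load.tle_file("commercial_imagers.tle")  # same assumed file as above

sinpo = wgs84.latlon(40.03, 128.19)  # approximate position of Sinpo port
t0, t1 = ts.utc(2017, 3, 1), ts.utc(2017, 3, 2)

for sat in satellites:
    times, events = sat.find_events(sinpo, t0, t1, altitude_degrees=60.0)
    for t, event in zip(times, events):
        label = ("access begins", "culminates", "access ends")[event]
        print(sat.name, t.utc_strftime("%Y-%m-%d %H:%M"), label)
```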
Step Four: My Wife Did This Part Actually
Finally, my wife wrote a Python script for me that checks if the current time matches any of the times from the Access Times list, and then tweets out when the access first occurs and when it ends.
For the actual tweets, we focused on FOR, not FOV, since FOV is difficult to confirm from the ground without some fancy tech, and a reasonable attempt to hide a ballistic missile operation should assume that the entire FOR is a viable threat.
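A minimal sketch of that last leg, assuming the access times land in a CSV with satellite/start/end columns and using the old v3-era tweepy calls (credentials and column names are placeholders):

```python
import csv
from datetime import datetime, timezone

import tweepy  # v3-era API; credentials below are placeholders

def load_windows(path="access_times.csv"):
    """Read (satellite, start, end) access windows exported from the
    simulation; column names and timestamp format are assumptions."""
    with open(path, newline="") as f:
        return [
            (row["satellite"],
             datetime.fromisoformat(row["start"]).replace(tzinfo=timezone.utc),
             datetime.fromisoformat(row["end"]).replace(tzinfo=timezone.utc))
            for row in csv.DictReader(f)
        ]

auth = tweepy.OAuthHandler("KEY", "SECRET")
auth.set_access_token("TOKEN", "TOKEN_SECRET")
api = tweepy.API(auth)

now = datetime.now(timezone.utc)
for sat, start, end in load_windows():
    if start <= now <= end:
        api.update_status(f"{sat} can see Sinpo until {end:%H:%M} UTC")
```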
As long as we periodically generate new TLEs and Access Time sheets (an automatable process), the simulation will stay accurate enough to let me know when a commercial satellite could potentially see me.
What Did We Learn?
We learned that a random analyst in Northern Virginia can set up a monitoring station for commercial imaging satellite views in the course of about a weekend, with software that takes about a week to learn and hardware that costs less than $1000.
Presumably, if you have nation-state money and employees, you can expand this dataset out to include all commercial imaging satellites, including the new SAR hotness. With some nation-state fancy hardware, you can probably start getting a reasonable chunk of the nation-state assets.
So What?
Countries (and non-state actors, for that matter) can know when open-source-contributing satellites are overhead. If they want to influence the narrative by showing off an operation, they can build a timetable to show them when optimal viewing times may occur. If they want to stay out of public view, they can build the converse, because they can know when these satellites can see them. Maybe they don’t care. But don’t assume that they don’t know.
They don’t need to buy satellite imagery, they just need some VPNs to pull the TLE datasets and some hardware to host it.
Nation-state assets are harder to account for, and are part of a more active competition. They can move, they can be built to deflect radar, they can be so far ahead of public technological knowledge that it may not even be clear what they look like.
And I’m not touching ELINT satellites. That’s black magic, even commercially.
But the most important takeaway is this:
The Twitter user who told me in 2016 that the DPRK could never dodge commercial satellite imaging platforms was wrong, and I was right.