CCSDS Concept Paper

Future Spacecraft Systems Designs
  Neptune and Uranus
 - or -
Interstellar Missions of Multi-Generational Duration

Much has been learned from the Pioneer and Voyager programs and from the Galileo and Cassini-Huygens spacecraft about which spacecraft designs are best suited to deep space scientific research.

Suggested subsystem minimal requirements

Absolute must requirements


There absolutely must be 3 redundant command and control computers for each kind of command and control function.

Sadly, there are missions in deep space that do not have 3 redundant computers.
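The 3-computer requirement allows simple 2-out-of-3 majority voting: any single computer that disagrees can be outvoted by the other two. A minimal illustrative sketch (not flight software; the command strings are hypothetical):

```python
# Illustrative sketch of 2-out-of-3 majority voting across redundant
# command and control computers (not flight software).
from collections import Counter

def majority_vote(outputs):
    """Return the value agreed on by at least 2 of the 3 computers, else None."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count >= 2 else None

# One computer suffers a transient upset but is outvoted by the other two:
print(majority_vote(["THRUSTER_OFF", "THRUSTER_ON", "THRUSTER_OFF"]))  # THRUSTER_OFF
```

If all three computers disagree, no majority exists and the vote reports a fault condition (None) rather than guessing.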



Gyroscopic Stabilization


Cassini is a three-axis-stabilized spacecraft and not a spin stabilized craft. There is a set of four Reaction Wheel Assemblies (RWAs). Only three RWAs are needed for spacecraft control, with one spare. Biasing the RWAs is important for spacecraft stabilization and it is accomplished by firing the RCS thrusters while changing the RWA rotation speeds, all while the spacecraft attitude (pointing) remains fixed. RWA biases are done quite frequently and are relatively heavy users of hydrazine, but they are absolutely necessary to keep the wheel speeds within safe ranges.


Electrical Power
Radioisotope Thermoelectric Generators (RTGs) are absolutely necessary, as solar power is not practical at such great distances from the Sun.

It is assumed that the Multi-Mission Radioisotope Thermoelectric Generator (MMRTG), a type of RTG developed for NASA space missions such as the Mars Science Laboratory, may be used on the Jupiter Europa Orbiter.
General power recommendations


General recommendations
CPU systems worth consideration (Primary Computers)
The RAD750 is a radiation-hardened single board computer, based on IBM's PowerPC 750. The RAD750 is manufactured by BAE Systems. It is intended for use in high radiation environments such as experienced on board satellites and spacecraft. The RAD750 was released for purchase in 2001 and the first units were launched into space in 2005.
CPU systems worth consideration (Secondary Computers)
The New Horizons spacecraft carries two computer systems, the Command and Data Handling system and the Guidance and Control processor. Each of the two systems is duplicated for redundancy, giving a total of four computers. The processor used is the Mongoose-V, a 12 MHz radiation-hardened version of the MIPS R3000 CPU.

The Mongoose-V 32-bit microprocessor for spacecraft onboard computer applications is a radiation-hardened and expanded 10–15 MHz version of the MIPS R3000 CPU.
Mongoose-V Features
Software Development Boards with 32-bit Rad-Hard Mongoose-V microprocessor

Command and Data Handling

Navigation and Systems Control

Each of the two systems is duplicated for redundancy, giving a total of four computers. The processor used is the Mongoose-V, a 15 MHz radiation-hardened version of the MIPS R3000 CPU, which should be capable of running at a reduced clock of 2.5 MHz during sleep or dormant phases of the mission.

Multiple clocks and timing routines should be implemented in hardware as well as software to help prevent faults or downtime.
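One common way to combine redundant clocks is to take the median of three independent readings, which masks one clock that has failed or drifted badly. A minimal sketch, with illustrative values:

```python
# Illustrative sketch: derive a fault-tolerant time from three independent
# clocks by taking the median, which masks one failed or drifted clock.
def voted_time(clock_a, clock_b, clock_c):
    return sorted([clock_a, clock_b, clock_c])[1]

# The third clock has drifted badly but does not affect the result:
print(voted_time(1000.02, 1000.01, 973.55))  # 1000.01
```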


Data Storage


Solid state recorder
A minimum of 3 low-power solid-state recorders (one primary, one backup, and one interstellar backup, not used until the interstellar mission begins), each holding up to 8 gigabytes (64 gigabits). One should assume that about 2% of capacity is unavailable for data storage due to file system overhead.
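A quick capacity check for the recorders described above, assuming the stated 2% file system overhead:

```python
# Capacity check for the 8 GB (64 Gbit) solid-state recorders described
# above, assuming ~2% of raw capacity is lost to file system overhead.
raw_gigabytes = 8
raw_gigabits = raw_gigabytes * 8              # 64 gigabits
usable_gigabytes = raw_gigabytes * 0.98       # about 7.84 GB usable per recorder

print(raw_gigabits, usable_gigabytes)
```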

Image Processing


Notes about light levels on the 2 outer gas giants

Uranus / Earth Comparison

Neptune / Earth Comparison

As a general rule, visible light cameras should be able to cope with illumination levels of 1 watt per square metre.
As a general rule, infrared cameras should be able to cope with 40 K blackbody imaging conditions.

Breakdown of cameras by megapixel capabilities and bit depths.

RULE : All cameras should be autonomous to whatever extent is possible. The older Voyager Program arrangement of having camera controls linked to the flight subsystem should no longer be necessary. Each camera should have its own 2 core (or 4 core) 32 bit CPU with Error Correcting Memory and an autonomous pointing system. Each camera should also have its own command and control system that is able to obtain spacecraft state to determine when to acquire images.
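As a sketch of the autonomy rule above, the camera's own controller might poll spacecraft state and decide for itself when to acquire an image. All field names and thresholds here are illustrative assumptions, not mission values:

```python
# Hypothetical decision rule for an autonomous camera controller: the
# camera reads spacecraft state and decides when to image without
# commands from the flight subsystem. All names/thresholds are illustrative.
def should_acquire(state):
    return (state["attitude_stable"]
            and state["target_in_fov"]
            and state["power_margin_w"] > 5.0)

state = {"attitude_stable": True, "target_in_fov": True, "power_margin_w": 12.0}
print(should_acquire(state))  # True
```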

Visible Light Navigation Cameras
Craft observing cameras

Telecommunications Subsystems

The difficulty of establishing a link between a spacecraft and a DSN station is, to a first order, a function of the required data rate and the square of the distance over which the link is occurring. Hence, a simple measure of end-to-end link difficulty can be obtained by taking the product of the data rate and the square of the distance. Note that this measure makes no assumptions about the telecommunications capabilities of either the spacecraft or the DSN ground station at each end of the link. It is simply indicative of the inherent difficulty of the link itself. Over the next 25 years, this downlink difficulty is expected to increase by roughly two-and-a-half orders-of-magnitude.
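The difficulty measure described above (data rate times distance squared) can be computed directly. The distances below are rough illustrative values:

```python
# Link difficulty as defined above: data rate times distance squared.
# Distances are rough illustrative values in astronomical units.
def link_difficulty(data_rate_bps, distance_au):
    return data_rate_bps * distance_au ** 2

# A 1 kbps downlink from Neptune (~30 AU) versus the same rate from Jupiter (~5 AU):
ratio = link_difficulty(1000, 30) / link_difficulty(1000, 5)
print(ratio)  # 36.0 -- the Neptune link is 36 times harder at the same data rate
```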

End-to-end uplink difficulty trends are, of course, similar to those of the downlink difficulty trends and involve roughly the same driver missions. However, because of the asymmetry between uplink and downlink rates for robotic missions discussed earlier, the effective isotropic radiated power needed to support the uplink to such missions is within the capability of the current DSN (assuming appropriately sized high-gain antennas onboard the spacecraft and forward-error correction coding on the uplink when needed).

However, such requirements and their associated link difficulties are generally bounded by the problem of providing emergency uplink at outer planet distances. The DSN’s current 70-meter, 20 kW, X-band capability enables spacecraft at Jupiter’s maximum distance from the Earth to receive a 7.8125 bps emergency transmission via their omni-antennas. To enable emergency uplink into an omni antenna at greater distances, the equivalent of 10 to 20 times the current 20 kW capability on the 70 m uplink dishes is needed, depending upon one’s spacecraft assumptions (e.g., system temperature, receiver loop bandwidth, etc.).
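Since the received bit rate into a fixed spacecraft antenna scales roughly as transmitter power over distance squared, the power needed to keep the same 7.8125 bps emergency rate at larger distances can be estimated. The Jupiter, Uranus, and Neptune distances below are illustrative assumptions:

```python
# For a fixed bit rate into a fixed spacecraft antenna, the required uplink
# power scales with distance squared. Baseline: 20 kW supports 7.8125 bps at
# Jupiter's maximum distance. All distances (AU) are illustrative assumptions.
def required_power_kw(baseline_kw, baseline_au, target_au):
    return baseline_kw * (target_au / baseline_au) ** 2

jupiter_max_au = 6.45
for name, au in [("Uranus", 21.0), ("Neptune", 31.0)]:
    print(name, round(required_power_kw(20.0, jupiter_max_au, au)), "kW")
```

Under these assumptions the required power comes out at roughly 11 to 23 times the current 20 kW, broadly consistent with the 10 to 20 times figure quoted above.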

CCSDS has suggested using bandwidth-efficient techniques compatible with the Block V receiver structure in the Deep Space Network. JPL is now researching modulation schemes for deep space missions (e.g. MRO) suitable for future high data rates (e.g. 10-100 Mbps), combined with highly efficient error correction codes (e.g. Turbo and LDPC codes).

Known problems of deep space communications
  1. Long Distance : Many bodies in deep space are hundreds of millions to billions of kilometres away from the Earth. Such long distances result in a very low signal to noise ratio (SNR).
  2. High Signal Propagation Delays : This is due to the enormous distances involved between the communicating entities and the relativistic constraint restricting signal transmissions to the speed of light. For example, one-way signal propagation delays for the Cassini mission to Saturn are in the range of 1 hour and 8 minutes to 1 hour and 24 minutes.
  3. High Data Corruption Rates : Extremely long distances cause the signals to be received at extremely low strengths at the receiver, and thereby increase the probability of bit-errors in the channel due to random thermal noise errors, burst errors due to solar flares, etc.
  4. Disruption Events : Since communicating entities in deep-space tend to be in motion relative to one another, the communication channel between them is prone to disruption. A planetary probe on the surface of Saturn’s moon Titan, for example, could experience disruption due to the rotation of Titan on its own axis (when it goes to the night side of Titan), when Titan passes under Saturn’s shadow during its revolution around the planet, and when other moons / planets/or the Sun itself block the line of sight to the destination.
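The propagation delays in item 2 can be sanity checked from the speed of light. The Earth-Saturn distance range below is an illustrative assumption:

```python
# One-way light time for a given Earth-spacecraft distance in AU.
AU_KM = 149_597_870.7        # kilometres per astronomical unit
LIGHT_KM_S = 299_792.458     # speed of light, km/s

def one_way_delay_min(distance_au):
    return distance_au * AU_KM / LIGHT_KM_S / 60.0

# Saturn ranges over very roughly 8.2 to 10.1 AU from Earth (assumed values):
print(round(one_way_delay_min(8.2)), round(one_way_delay_min(10.1)))  # 68 84
```

These figures match the 1 hour 8 minute to 1 hour 24 minute range quoted for Cassini above.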

General goals
Downlink prioritization
A "Data Priority Paradigm" with 3 tunable parameters of control for applications, namely Immediacy, Probability of Delivery and Mission Value.

Ideally the system should be based on using Immediacy and "Probability of Delivery" but "Mission Value" should act like a 'gamma correction' factor to tweak up the delivery of some data streams or image files.
  1. Immediacy : Immediacy is a notion of how urgently a unit of application data (called a "job" in the subsequent discussion) needs to be received. For example, a message from a science instrument entering a critical state, implying status such as "Instrument too hot" or "Low battery condition", or commands such as "Stop! Don't go down that" sent from Earth, might need to be reported ahead of all other experimental results / commands. We say that such a job has higher immediacy requirements than the others.
  2. Probability of Delivery (PoD) : In the space environment, where there is always a non-trivial bit-error rate on the channel, the communication channel cannot guarantee 100% reliable data delivery. However, some application jobs may require higher PoD guarantees than others. For example, the picture of a microbe found on the surface of Titan may be of much more importance to the mission than regular house-keeping telemetry data, and thus may need higher probability of delivery requirements.
  3. Mission Value : How valuable the data is to the mission. As noted above, "Mission Value" should act like a 'gamma correction' factor to tweak up the delivery of some data streams or image files.
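One possible way to combine the three parameters is to score each job from Immediacy and PoD, then apply Mission Value as a gamma-style correction. The formula, weights, and gamma below are purely illustrative assumptions, not a CCSDS definition:

```python
# Hypothetical scoring of downlink jobs: Immediacy and PoD drive a base
# score, and Mission Value is applied as a gamma-style correction. The
# formula, weights, and gamma value are illustrative assumptions only.
def downlink_score(immediacy, pod_required, mission_value, gamma=2.2):
    base = 0.7 * immediacy + 0.3 * pod_required      # all inputs in [0, 1]
    return base * (mission_value ** (1.0 / gamma))   # 'gamma correction' tweak

housekeeping = downlink_score(immediacy=0.2, pod_required=0.5, mission_value=0.3)
microbe_image = downlink_score(immediacy=0.9, pod_required=0.99, mission_value=1.0)
print(microbe_image > housekeeping)  # True
```

As with display gamma, an exponent below 1 lifts mid-range values, so even moderately valuable data is not starved while truly critical jobs still rank first.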

How this might work

Here, we consider various mechanisms that could be used to guarantee the priority requirements of application jobs.
  1. Adapting the error correction mechanisms : We might choose to increase the strength of the Forward Error Correction (FEC) mechanisms used for the higher priority jobs, thereby improving the chances of transmitting them successfully, at the cost of the additional overhead this would impose.
  2. Modulating the number of redundant transmissions : In current deep-space missions, the FEC mechanisms and frame sizes tend to be fixed for a mission phase. In such a case, the only option we may have to guarantee the job priority requirements might be to transmit the frames redundantly in the original transmission. We believe that such a simple mechanism is used in current missions when the command / data to be sent is extremely critical.
  3. Adapting the frame size : We might choose to decrease the size of datalink frames for the higher priority application tasks, improving the chances of successful transmission of each frame. This would introduce additional overhead in data transmission, as the ratio of header data to application data would increase.
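The frame-size trade in mechanism 3 can be illustrated numerically: at a given bit-error rate, a shorter frame is more likely to arrive intact, but its header overhead is proportionally larger. The BER and header size below are assumed values:

```python
# Frame-size trade: probability a frame arrives error-free versus the
# header overhead it carries. BER and header size are assumed values.
def frame_success_prob(frame_bits, ber):
    return (1.0 - ber) ** frame_bits

def efficiency(payload_bits, header_bits):
    return payload_bits / (payload_bits + header_bits)

ber, header_bits = 1e-5, 48 * 8
for payload_bytes in (512, 8192):
    payload_bits = payload_bytes * 8
    frame_bits = payload_bits + header_bits
    print(payload_bytes,
          round(frame_success_prob(frame_bits, ber), 3),
          round(efficiency(payload_bits, header_bits), 3))
```

With these assumptions the 512-byte frame survives the channel far more often than the 8192-byte frame, but carries proportionally more header overhead per delivered bit.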

Error Correction

Lessons from the Voyager Program and Galileo craft

The Galileo spacecraft was only able to transmit mission data through a low-gain antenna because the high-gain antenna on board the spacecraft had failed to deploy properly and was essentially useless. This failure scenario should be avoided at all costs by using fixed, solid antennas, although the risk of such a failure cannot be eliminated.

The data rate from this antenna was not designed to exceed 100 bits per second. To offset some of the performance loss, the spacecraft’s computer had to be extensively reprogrammed to include new data compression and coding algorithms.

The baseline coding system for the low gain antenna mission uses a Reed-Solomon code of block length 255 concatenated with a (14, 1/4) convolutional inner code, and interleaves the Reed-Solomon symbols to depth 8. The convolutionally encoded symbols are decoded by a maximum likelihood (Viterbi) decoder. Each Reed-Solomon decoded codeword is then decoded algebraically.
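The depth-8 interleaving mentioned above spreads each Reed-Solomon codeword's symbols across the transmitted stream, so a channel burst damages only a few symbols in any one codeword. A toy sketch of block interleaving (tiny codeword sizes for readability; real codewords are 255 symbols long):

```python
# Toy sketch of depth-8 block interleaving: symbols are written as 8 rows
# (one codeword per row) and read out by columns, so a channel burst is
# spread across all 8 codewords. Real codewords are 255 symbols long;
# tiny 4-symbol codewords are used here for readability.
DEPTH = 8

def interleave(symbols, depth=DEPTH):
    n = len(symbols) // depth
    rows = [symbols[i * n:(i + 1) * n] for i in range(depth)]
    return [rows[r][c] for c in range(n) for r in range(depth)]

codewords = list(range(32))          # 8 toy codewords of 4 symbols each
sent = interleave(codewords)
print(sent[:8])  # [0, 4, 8, 12, 16, 20, 24, 28] -- one symbol from each codeword
```

A burst wiping out 8 consecutive transmitted symbols thus erases only one symbol per codeword, which stays well within the Reed-Solomon correction capability.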

In order to fix the problem of only being able to use the low gain antenna, 2 types of decoding enhancements were proposed. These coding enhancements involve “re-decoding” of some of the data.

The first type of re-decoding is confined to the Reed-Solomon decoder and utilizes information from neighbouring codewords within the same interleaved block to erase unreliable symbols in un-decoded words. The second type involves re-decoding by the Viterbi decoder, using information fed back from codewords that have been successfully decoded by the Reed-Solomon decoder.

Several conclusions were drawn from the analysis and delivered to the Galileo mission planners. These comparisons are valid for the Galileo system using a (14, 1/4) convolutional code with depth-8 interleaving of Reed-Solomon symbols, and achieving a final decoded bit error rate of 1 x 10^-7. A second stage of Viterbi decoding without any Reed-Solomon erasure declarations is worth about 0.37 dB relative to the baseline system.

Adding two more stages of Viterbi decoding is worth an additional 0.19 dB for a total gain of 0.56 dB. The marginal additional improvement from utilizing erasure declarations was shown to be around 0.19 dB for one-stage decoding (no Viterbi re-decoding), but only 0.02 dB for two-stage decoding and essentially nil (0.00 dB) for four-stage decoding.

General recommendations

Primary Communication System (downlink baudrate > 300 bps)

Engineering Telemetry System (downlink baudrate < 300 bps)
Uplink Command System

Uplink Software Update System

Ancillary craft

Created by : Max Power
Original ideas : 15 April 2007
Document Created : 11 November 2009
Last modified : 25 November 2013
Revision state : Version 0.55

CCSDS Concept Papers :

CCSDS Concept Papers are working documents of the Consultative Committee for Space Data Systems (CCSDS), its Areas, and its Working Groups.

Note that other groups may also distribute working documents as CCSDS Concept Papers.

CCSDS Concept Papers have no official status and are simply the vehicle by which technical suggestions are made visible to the CCSDS. They are not part of any archival document series and these documents should not be cited or quoted in any formal document.

Unrevised documents placed in the CCSDS Concept Papers directories have a maximum life of nine months. If a document progresses to a CCSDS Proposed Standard, it will be replaced in the CCSDS Concept Papers Directories with an announcement to that effect.