Implementing encryption on pagers
Written by Paul Schlusser — Technical Director, Infostream Pty Ltd.
The paging industry is working towards developing a paging encryption standard for FLEX and POCSAG. Operators and manufacturers are concerned about the effects on bandwidth utilisation and pager sensitivity because of a commonly held, but largely mistaken, impression that encryption could lead to degradation of either or both.
This paper discusses encryption techniques as they apply to pagers and demonstrates that, other than the entirely solvable problem of agreeing on a cryptographic nonce,* a stream-ciphered paging message is no more likely to be received incorrectly than an unencrypted one and, aside from the nonce transmission itself, need be no longer than the original message. We also make the case for eschewing proprietary systems that rely in part on implementation secrecy for their security, because they cannot be verified and because they are not interoperable – an increasingly important requirement in today's paging technology environment.
* In security engineering, a nonce is an arbitrary number used only once in a cryptographic communication. The term stands for “number used once” or “number once.”
Encryption is the hot topic on the agenda in the paging business and is generating some excitement and interest in the lead up to the CMA conference in Miami. And for an industry that has been sorely lacking in excitement for nearly a decade, that can only be a good thing!

Encryption in paging can mean a lot of things to different people. However, in this article we're just going to focus on the problem of encryption from paging terminal to pager. This is the area of most concern for a number of reasons, not least of which is that now, thanks to the Internet, scanners and abundant PCs, it is almost laughably easy for anyone with even the most rudimentary technical knowledge to snoop on any POCSAG or FLEX paging channel, reading every message that goes to air. When those messages contain sensitive operational information on security operations, patient details and the like, it stops being a laughing matter and becomes an issue of legislative compliance and customer/client confidentiality.

Unfortunately, the subject of encryption is full of mystery, speculation and misinformation. Partly this is in response to the perceived complexity of the subject (encryption is, after all, all about obfuscation), but it's also fair to say that manufacturers of paging equipment (infrastructure and pagers) have added to that confusion through a desire to protect intellectual property and, frankly, just plain bad implementations. For example, there is an all too common mistaken notion that in order for any encryption system to be secure, its method of operation should itself be a secret. Then there is the persistent and nearly universally held idea that encryption will either increase the size of a paging message and/or decrease the sensitivity of the pager.
The purpose of this article is to shine some light on how encryption works in the paging context and hopefully thereby, go some way towards dispelling these mistaken notions. It is based on years of work at Infostream and its sister companies with encrypted pagers and the practical experience of implementing a variety of encryption systems both in paging receivers and infrastructure.
The need for transparency and a public standard
Ironically, for an encryption system to be secure, it needs to be public. In pre-computing times, information was kept secure by hiding it. Secret notes, invisible ink and secret encoding systems were the mainstays of information security. However, in the latter half of last century, it was realised that certain mathematical operations could be made so difficult to reverse, and yet sufficiently simple to perform with electromechanical systems and ultimately electronics, that it was no longer necessary to be secretive about how the encryption was done. Instead, one could rely on the unbreakable nature of the encryption system itself for the security, when used with a secret key. That was a fundamental shift in approach.
Early encrypting machines that relied on this principle, such as the German Enigma were actually pretty good given their mechanical limitations, and it took a lot of clever mathematics and computing power to break them. Of course, the workings of the machines themselves were still closely guarded secrets (well, not so much as it turned out), but the secrecy was really just an admission that the machine itself was not perfect – it was after-all, a battlefield appliance constructed with cogs, switches and light-bulbs.
These days, we have powerful computers (even in our smallest and cheapest pagers) that make effectively unbreakable security a practical reality. Mathematicians, scientists and engineers have developed encryption standards that are impenetrable to any feasible means of cryptographic attack even if you know beforehand exactly how the encryption is done. These standards have names like AES128 and IDEA (older ciphers such as DES and RC4 have since been shown to be too weak against modern computing power), and you can find the description of how they work, including source code to make them work, on Wikipedia (and plenty of other places). The only thing that needs to be kept secure is the encryption key.
However, an unbreakable encryption algorithm doesn’t ensure a secure encryption system. The algorithm, or encryption standard employed is important, but it is all too easy to take a strong algorithm and produce a weak system overall because of implementation flaws or poor design choices. It is not nearly sufficient to disclose or insist that the encryption uses AES 128 bit, or IDEA or whatever standardised cipher is employed. If the implementation is wrong or badly done, the encryption can be as good as useless. The only protection for the paging user is transparency – better, if the system is both transparent and standardised.
Revealing how a system works means that it can be examined for design and implementation flaws. If, say, you obtained an opaque black-box encryption system based on AES128 from a hostile government and they said, “trust me” – would you? Even if you could know for sure that there was AES128 inside somewhere, how would it be possible to know for sure whether it was used properly or indeed, at all?
Even putting aside the obvious benefit to the customer of having interoperable equipment, by standardising the system from the point of encryption to the point of decoding, all manufacturers would, of necessity, be forced to examine the system in detail as part of their implementation, and any weaknesses would almost certainly be quickly exposed. A single manufacturer, working alone, has neither the resources nor, frankly, the motivation to expose any flaws in a proprietary solution.
That is not to say that open software source is required to be confident of the solution. If the encryption system is standards based and the implementation methodology is clearly disclosed from message plus encryption key to pager screen, with a description and/or block diagram, sufficient for a third party to implement an equivalent, verification of the implementation and security is very straightforward even without source code.
In this article, we’re not going to attempt to propose encryption standards for use in the paging applications, but rather to discuss some of the issues that must be solved to make encryption work, based on a solid theoretical understanding and plenty of practical implementation experience. Specifically we are going to look at the issues of message size increase and pager sensitivity reduction and why these might arise. And importantly, why neither is a necessary outcome of the encryption process.
Before tackling the two issues specifically, let's first have a look at the basic approaches to encryption that are relevant to paging. There are many ways to encrypt messages besides the two covered below, and for those interested in the subject and the history of cryptography in general, I'd highly recommend Simon Singh's extremely readable book on the subject, “The Code Book.” However, we'll stick specifically to the subject of paging.
Approaches to encryption — block and stream ciphers
Block ciphers work by dividing a message into fixed-sized blocks of data that are then successively encrypted using the encryption algorithm operating with the secret key. Decryption reverses the process: the encrypted message is first divided into blocks that are converted back into the original text using the inverse of the same algorithm with the same key. With a few extra twists and precautions taken into account, without the encryption key it is impossible to recover the original message.
Some simple (and obvious to a cryptanalyst) mistakes can arise if the most naïve approach described above is used without some precautions. As just one simple example, one might record the paging message over the air and replay it at a later time to create false alarms. Worse still, if someone knew what the original message said (say, by peeking over the shoulder of the paging user), it might be possible to rearrange the order of the blocks to change the meaning of the original message sent, perhaps creating a new, meaningful-looking message.
Although there are very straightforward ways to overcome these simplistic forms of attack, the example serves to illustrate that merely claiming to use one particular standard or another doesn’t make for a secure paging system.
Stream ciphers work by combining the original message (usually by performing a bitwise “exclusive or” – which is basically a binary addition function without any carry forward), with a semi-random sequence of numbers or binary bits, generated using the encryption algorithm and the secret key. The semi-random (or pseudo-random) sequence is known as the key-stream. Decoding the message, as with the block cipher, is exactly the same as the encoding – the encoded message is combined again with the key-stream leaving the un-encoded message as the result.
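Because the exclusive-or operation is its own inverse, one function performs both encryption and decryption. A minimal Python sketch of the idea (the randomly generated keystream here is just a stand-in for one derived from a real cipher and key):

```python
import secrets

def xor_stream(data: bytes, keystream: bytes) -> bytes:
    """Combine message bytes with keystream bytes, bit by bit, using XOR."""
    return bytes(d ^ k for d, k in zip(data, keystream))

message = b"PATIENT 14 TO WARD 6"
keystream = secrets.token_bytes(len(message))  # stand-in for a cipher-derived stream

ciphertext = xor_stream(message, keystream)
recovered = xor_stream(ciphertext, keystream)  # the same operation decrypts
assert recovered == message
```

Note that the ciphertext is exactly as long as the message; nothing about the combination step adds any overhead.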
Importantly, stream ciphers do not need to divide the original message into blocks and equally importantly for reasons we’ll cover in a moment, the message is only combined with the key-stream on a bit by bit basis to accomplish the encryption. The original message never passes through the encryption algorithm directly.
The key-stream itself is often generated from a standard block cipher using a seed value that is known in the cryptographic business as a “nonce.” In effect, the block cipher algorithm (AES, IDEA etc.) is used to encrypt the nonce, and the resulting output is the required pseudo random key-stream used in the encryption.
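This counter-style construction can be sketched in a few lines of stdlib-only Python. Since the standard library has no AES, SHA-256 stands in as the pseudo-random function here purely for illustration; a real system would use a standardised block cipher, and the function and parameter names are our own:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Generate `length` pseudo-random bytes by 'encrypting' the nonce
    plus an incrementing counter under the secret key.  SHA-256 is an
    illustrative stand-in for a real block cipher such as AES."""
    out = b""
    counter = 0
    while len(out) < length:
        block = hashlib.sha256(
            key + nonce + counter.to_bytes(8, "big")).digest()
        out += block
        counter += 1
    return out[:length]
```

The same key and nonce always reproduce the same stream (which is how the pager decodes), while a different nonce yields a completely different stream.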
Advantages and disadvantages
Block ciphers are the simplest and most straightforward way to apply an encryption algorithm to a message. By following a few well-understood special precautions, the encryption process can be made impenetrably secure from attack. Bulk encryption of data on the Internet is commonly done using block ciphers.
In the paging context however, the block cipher approach has two major drawbacks. Firstly, the transmission size must always be a whole multiple of the encryption block size. Because the cipher is necessarily block oriented, any message that doesn't completely fill the final block must be padded out with extra characters. That process increases the message length and wastes precious airtime.
Secondly, corruption (even a single bit) within a given enciphered block will render the entire block unrecoverable. For paging, that’s very bad – bit errors are a way of life in the paging world.
Of these two listed disadvantages, the second is the real killer. Employing a block cipher for paging is one sure way to dramatically reduce the sensitivity of your pager. Even single bit errors are amplified by the encryption system to render thirty-six entire characters of your message unreadable (assuming a 256-bit block size divided into seven-bit characters).
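The padding overhead is easy to quantify. A short Python sketch, using the 256-bit block and seven-bit characters assumed above:

```python
import math

BLOCK_BITS = 256                              # example block size from the text
CHAR_BITS = 7                                 # POCSAG alphanumeric character size
CHARS_PER_BLOCK = BLOCK_BITS // CHAR_BITS     # 36 characters per block

def padded_length(n_chars: int) -> int:
    """Characters actually transmitted once the final block is padded."""
    return math.ceil(n_chars / CHARS_PER_BLOCK) * CHARS_PER_BLOCK
```

A one-character message balloons to thirty-six transmitted characters, and a thirty-seven-character message occupies two full blocks.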
On the other hand, a stream-ciphered message is exactly the same size as the original plain text. Because stream ciphers operate by pseudo randomly flipping bits in the original message, one bit in the cipher-text corresponds to one bit in the clear-text. No padding is required.
And for similar reasons, the enciphered message is no more susceptible to bit errors than an un-enciphered one. Any single bit corruption in the received message will result in exactly a single bit of corruption in the decoded message. No loss of pager sensitivity will result. That is true for one, two or any number of corrupted bits.
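This one-bit-in, one-bit-out property is easy to demonstrate. In the Python sketch below, one bit of the ciphertext is corrupted in "transmission" and the decoded message differs from the original in exactly one bit (the fixed demo keystream is illustrative only):

```python
def xor_bytes(data: bytes, ks: bytes) -> bytes:
    """XOR a message with a keystream of the same length."""
    return bytes(d ^ k for d, k in zip(data, ks))

message = b"ENGINE 2 RESPOND CODE 1"
ks = bytes((i * 73 + 41) % 256 for i in range(len(message)))  # fixed demo keystream

cipher = bytearray(xor_bytes(message, ks))
cipher[5] ^= 0x04                       # simulate a single bit error on air
decoded = xor_bytes(bytes(cipher), ks)

# exactly one bit differs between the sent and decoded messages
diff_bits = sum(bin(a ^ b).count("1") for a, b in zip(message, decoded))
assert diff_bits == 1
```

Contrast this with a block cipher, where the same single flipped bit would scramble the entire block containing it.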
The major, and really only meaningful disadvantage of a stream cipher in the paging context is that a unique nonce is required for each message. The nonce doesn’t need to be a secret, but it must be agreed between the sender and the receiver and it must be unique for each message. Note that the nonce IS NOT the same as the encryption key. The encryption key absolutely must be a secret and in fact it should be the ONLY secret between the parties. The nonce is unique to the message, but the key needn’t be.
The nonce serves to make the key stream used for each message unique. If the same key-stream were used for two messages and one message becomes known (by peeking over a shoulder), then any message encrypted using the same key-stream would be easy to deduce.
Block or stream cipher?
Now having covered some of the relevant basics of encryption, let’s specifically turn to the question of sensitivity loss and paging bandwidth usage.
As shown above, for block ciphers, both loss of sensitivity and increase in bandwidth are likely outcomes in the paging context. Padding out any given paging message up to the next block of thirty-six (36) or so characters is clearly going to waste a lot of airtime. Similarly, single bit errors that render entire blocks of text unrecoverable will clearly have a dramatic effect on pager sensitivity. For those two reasons alone, nothing more needs to be said about block ciphers from a paging perspective. They fail two fundamental practicality tests. So what about stream ciphers?
On the face of it, stream ciphers appear to be the perfect solution for paging – the encryption process does not enlarge the message, nor are errors amplified. However, there is the troublesome problem of the nonce. Without a nonce, the stream cipher is left wide open to attack. The Germans learnt this to their peril (or rather, suffered from it, because they were unaware of their mistake) during World War Two, when some very clever Brits were able to reverse engineer an entire high-level command encryption system that they code-named “Tunny.” For the fascinating history, and how it led to the first true fully electronic computer, see http://en.wikipedia.org/wiki/Lorenz_cipher or visit the fabulous museum at Bletchley Park next time you are in the UK.
Solving the nonce problem
In the paging context, the fundamental problem for stream ciphers is “how does the pager agree with the message sender about the nonce?”
There are a few possible answers to this question, described below. Each has its merits and disadvantages, and by combining them a very practical solution can be obtained.
The sender tells the receiver what the nonce is, in plain text. This seems counterintuitive but is entirely valid. Because the nonce is used to seed the encryption process, surely sending it in plain text will reduce the security? But in fact, it turns out not to be the case. Cryptologists have long since mathematically proven that even if you know that the first nonce used was “one”, the second “two,” and can easily guess that the next will be “three”, you have learned nothing that will practically aid in decrypting subsequent messages.
Learning the nonce does not enable the attacker to deduce the key-stream any more easily, so long as the encryption system used is strong enough (as are all of the well studied international standards). If knowledge of the nonce did allow easier deduction of the key-stream that it produced, that would be considered a fatal flaw in the encryption standard and the standard would have been rejected. Standards have been discarded for a lot less!
Remember, the nonce is NOT the encryption key. The disadvantage of sending the nonce in this way (or sending it at all) is that if it is corrupted (due to bit errors in the transmission), the entire message will be unrecoverable. And of course, the nonce requires airtime to transmit, although it can be made quite small.
Sending the nonce doesn’t solve the replay attack problem either. Since the nonce is part of the message, it’ll be just as valid tomorrow as it is today.
The nonce can be based on time. Time has the advantage of never repeating itself (ever) and is freely known to everyone. We assume here that the pager is synchronised to some known time-base and thus has an accurate clock. For FLEX, that is easy because it’s a synchronous protocol and has a time-setting standard built in. For POCSAG, it’s more of a problem because it is asynchronous and doesn’t have a (standardised) method for setting the time (something worthy of the CMA technical committee’s attention).
The time-based nonce seems like a great idea. It can be easily known by sender and receiver, it takes no extra airtime per message to transmit, and is inherently immune from corruption so can’t affect the pager sensitivity. However, there are two fundamental problems to be solved for a time-based nonce.
The first is rounding ambiguity. No matter how accurately you can set the pager clock, there is always the slight possibility that when you convert the time to a number to seed the encryption system, the encoder and the pager will come up with different answers to the rounding problem. To illustrate this point, imagine two observers both looking at an analogue clock on the wall that reads 2:25:30 and are asked to tell the time to the nearest minute. Depending on how each observer reads the second hand, the time could be read either as 2:26 (rounded forward) or 2:25 (rounded backwards). If the same observers are looking at two separate digital clocks, albeit reasonably accurately synchronised, there will also be some possibility that when performing the rounding (to the hour, minute, second or millisecond) that two different answers will result.
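The digital-clock version of the problem takes only a few lines to reproduce. In this Python sketch, two clocks synchronised to within 40 milliseconds still derive different nonces when naïvely rounding the time to the nearest minute (the function name is ours, for illustration):

```python
def nonce_from_time(seconds: float) -> int:
    """Naive time-based nonce: the time rounded to the nearest minute."""
    return round(seconds / 60)

encoder_clock = 149.98   # encoder reads a shade under 2:30
pager_clock = 150.02     # pager reads a shade over 2:30 -- only 40 ms apart

# the two sides disagree, and the message cannot be decoded
assert nonce_from_time(encoder_clock) != nonce_from_time(pager_clock)
```

The closer the true time sits to a rounding boundary, the smaller the clock error needed to trigger the disagreement, so no amount of "good enough" synchronisation eliminates it entirely.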
The second problem is that of paging transmission delay. Time is a feasible way to generate a nonce (assuming the rounding problem is solved), but only if the message is encrypted with a time-based nonce that is accurate at the time of message transmission and consequently, reception. If the original message sender sends a message to the paging switch and there is congestion in the network, some seconds or minutes can elapse before the message goes to air – and by that time, the time-based nonce may be hopelessly out of date. Of course, you could send the time along with the message, but that’s tantamount to sending the nonce and is susceptible to the same problems described above.
One obvious way out of this problem is to have the encryption performed by the paging terminal, employing a nonce selected at the time of transmission. However, that imposes a number of (not necessarily fatal) constraints.
For a start, the paging terminal itself must directly support encryption – you can't just send your encrypted message via TAP to any old paging terminal that then treats it like any regular paging message. That's not a problem if one can afford to replace or upgrade the terminal, but not so practical otherwise. That said, it's worth pointing out that the encrypted stream will likely be full of potentially troublesome control characters that can trip up an old encoder not designed (or debugged) to handle them. At Infostream, we've seen this problem first hand many times with our financial information pager that operated in over twenty different countries.
Furthermore, if the terminal performs the encryption, it must start with the unencrypted message. Even if one could send the message in an encrypted form to the paging terminal, it must become plain text prior to the over-air encryption and it is at least theoretically readable by the terminal operator/paging carrier. And of course the encryption key must (at least temporarily) be available to the paging terminal. It becomes a question of “who do you trust?” If you are building a private dedicated emergency services paging network, then you probably don’t have a problem. If you are a commercial paging customer using a public carrier, this might be a problem for you.
One advantage that FLEX has over POCSAG is that the time setting of FLEX is inherently unambiguous. While the FLEX time setting may not be perfectly accurate (although it's pretty darn good), any inaccuracy doesn't matter. Each frame and cycle in FLEX is both known and agreed upon by the terminal and the pager, so the absolute accuracy relative to real time is irrelevant. There is perfect agreement between pager and terminal as to what the time is (at least as far as frame and cycle are concerned) and thus rounding errors while generating the nonce are avoidable.
However, that is not to say that POCSAG can’t employ the same trick. Although your average POCSAG pager will assume that a batch may start at any time, the terminal doesn’t have to do it that way. It is quite feasible to set a rule and have the paging encoder only generate a POCSAG batch that has a specific alignment to a given second or other convenient time interval. Let’s say for example, that the POCSAG encoder is designed so that the POCSAG batch (specifically the leading edge of the first bit of the first sync-code) always starts at a time that is an exact second as determined by the paging encoder. That means that for a given transmission, one might have to wait for (on average) about half a second before a batch can be started, but that’s a tiny price to pay – even when the network is congested. When congestion arises, the batching starts to get very long and the half-second delay becomes a very small fraction of the overall key-up time for the network.
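The cost of that alignment rule is just the wait until the next whole-second boundary, which a sketch makes concrete (the function name is ours, for illustration):

```python
import math

def delay_until_batch_start(now: float) -> float:
    """Seconds the encoder must wait so that the leading edge of the
    first sync-code bit leaves exactly on a whole-second boundary."""
    return math.ceil(now) - now

# e.g. at t = 12.34 s the encoder waits 0.66 s; on average the wait
# is about half a second, and zero if already on a boundary
wait = delay_until_batch_start(12.34)
```

As the text notes, once the network is congested and batches grow long, this sub-second wait becomes a negligible fraction of the overall key-up time.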
Using this quasi-synchronous POCSAG approach, not only is there a very handy way for the pager to set its time accurately, there is also a perfectly agreed time between the paging terminal and the pager that can be maintained without too-frequent time-setting messages. So while it's not exactly zero additional airtime required to generate the nonce, it's as close as you're likely to get.
One additional advantage of the time-based nonce approach is that it is inherently immune from replay attack. Any given time only occurs once, so the same message (say copied off the air and then replayed at a later time) will not be decoded correctly by the pager for a second time.
There are hundreds of ways of broadcasting the time to the pager, and the various technical committees can haggle over the best approach. Fundamentally, it must be unambiguous (not subject to rounding or inaccuracy) and sufficiently robust that the pager always has a good lock on the time. FLEX has this already but for POCSAG, it is a problem yet to be solved in a standardised way. There are plenty of easy to implement options.
As a starting point, we suggest using the code-word position within the POCSAG batch as part of the solution. It cannot be misunderstood by the pager and doesn’t rely on super accurate clocks and thus is much easier for a pager with a low quality crystal to disambiguate. By way of example, Infostream’s original OTA message confirmation system used the code-word position of the message and the time of the POCSAG batch (with much lower resolution) to unambiguously identify messages. The same principles would apply to generating an unambiguous nonce.
Sequential incrementing nonce
The third and final way we'll discuss is the incrementing nonce approach. In this system, we might use a nonce that simply increments from message to message. The message sender starts by setting the nonce to, say, zero, and each subsequent message simply increments the value. That's pretty straightforward: it doesn't require accurate clocks and is immune to paging transmission delays, so the original sender can encrypt the message prior to sending it to the terminal. Thus, an intermediary, including the paging carrier, can never read the message.
An incrementing nonce is also immune to replay. Once the nonce has been used up, the pager and terminal move on to the next, and replaying the same message again will not result in a successful decode.
Unfortunately, an incrementing nonce does suffer from the problem of missed messages. If a message is missed, the pager and message sender will have a different idea of what the nonce should be for all messages after the missed one. One possible way of dealing with that is (either automatically or manually) to try out the next several nonce values and see which one results in a successful decryption. The pager would take the next ten or so sequential nonce values (assuming you wish to guard against up to ten sequentially missed messages) and find the one that doesn't result in a whole bunch of unprintable control characters in the message. POCSAG alphanumeric messages use seven-bit ASCII characters, of which roughly one quarter are control characters that do not normally appear in a message. Thus, assuming that decryption with the wrong nonce results in a perfectly random sequence of characters (and that is the highly desired property of a good encryption algorithm), any wrong nonce will give roughly a 25% chance of generating a control character at any given character position.
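The trial-decryption idea can be sketched end to end in Python. Everything here is illustrative: the toy keystream uses SHA-256 as a stand-in for a real cipher, and the function names are our own. The pager simply tries the next ten candidate nonces and keeps the one that produces a printable message:

```python
import hashlib

def keystream(key: bytes, nonce: int, length: int) -> bytes:
    # toy keystream: hash of key + nonce + counter (illustration only;
    # a real system would use a standardised block cipher)
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce.to_bytes(8, "big")
                              + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def crypt(data: bytes, key: bytes, nonce: int) -> bytes:
    return bytes(d ^ k for d, k in zip(data, keystream(key, nonce, len(data))))

def looks_like_text(decoded: bytes) -> bool:
    # a wrong nonce yields effectively random characters, roughly a
    # quarter of which would be control characters in 7-bit ASCII
    return all(32 <= b < 127 for b in decoded)

key = b"shared-secret-key"
message = b"MEET AT THE LOADING DOCK AT NINE THIRTY SHARP TONIGHT"
ciphertext = crypt(message, key, nonce=7)   # sender is at nonce 7

# the pager last decoded nonce 4, so it scans the next ten values
for candidate in range(5, 15):
    plaintext = crypt(ciphertext, key, candidate)
    if looks_like_text(plaintext):
        break
```

With a message this long the scan lands on the correct nonce; the article's next paragraphs explain why the same heuristic becomes unreliable for short messages.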
However this method isn’t particularly reliable for short messages. It is only with messages that are forty or more characters in length that the odds of a correct guess are good enough to be reliable. This problem is highly analogous to the problem of trying to guess (without knowing beforehand) whether a given POCSAG message is alphanumeric or numerically encoded. Anyone who’s written software for an off-air decoder for POCSAG will be familiar with the problem. At best, it’s a guess although it can be reasonably reliable if the message is long enough.
An alternative approach to resolving the nonce ambiguity is to encode a known value into the message that can be tested after decryption. Suppose that you insert three known characters at the start of the message prior to encryption. The probability of successfully decoding those three characters with the wrong nonce is roughly one in two million, or one in two hundred thousand if you are testing ten possible nonce values. That's probably good enough to rely on. To be frank, in the paging context, you have a much higher probability of losing a message for many other reasons than a one-in-two-hundred-thousand failure of the encryption system. Even with bit errors possible, so long as the BCH decoding in the pager doesn't amplify the errors (which is feasible up to four bits, and even likely up to five), the odds of correctly determining the nonce are sufficiently high for practical purposes.
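The arithmetic behind those figures follows directly from the seven-bit character size:

```python
CHAR_BITS = 7                        # POCSAG alphanumeric character size
CHAR_VALUES = 2 ** CHAR_BITS         # 128 possible codes per character

# probability that a wrong nonce happens to decode three known
# marker characters correctly: about one in two million
false_accept = 1 / CHAR_VALUES ** 3

# testing ten candidate nonces multiplies the exposure tenfold:
# about one in two hundred thousand
false_accept_ten = 10 * false_accept
```

Each additional marker character divides the false-accept probability by another 128, which is the "more than a hundred fold" improvement noted below.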
Car door remote controls with rotating codes use a similar approach and often scan up to a few dozen or even a hundred different codes ahead looking for a match. If you exceed that number of missed activations, a special procedure is required to re-synchronise the remote with the receiver in the car. The same would apply to the encrypted pager.
In the context of a typical eighty-character alphanumeric message, those three characters represent less than four percent (4%) additional message payload. Technically, that's a small loss in spectrum efficiency. The odds of a mistaken match improve more than a hundredfold for each additional character, and the immunity from bit errors improves. It just depends on the trade-off to be made. There is little point in a system that has a one-in-a-million chance of failure when losing a message through other means (out of range, fading, multipath etc.) is much closer to one in a thousand (on a good day).
The foregoing discussion on resolving ambiguity of an incrementing nonce assumes that you are relying solely on the pager to do the work. Another approach is to provide the end-user with a mechanism to manually step the nonce forward to account for any messages he/she might have missed. If the message they see on the screen looks obviously garbled, pressing a “missed message” button a few times would likely result in a good decode that to a human reader will be unambiguously the correct one. With enough computing power on-hand, and enough understanding of what makes a readable message, the pager might do ninety percent as well.
Another possible reason for loss of sensitivity
In the interest of thoroughness, it's worth covering one additional mechanism whereby a pager's sensitivity might reduce because of encryption, in an effort to understand the common (mis)perception. And that mechanism is that the CPU load of performing the decryption might desensitise the paging receiver. It's no secret that there are plenty of pager designs in the field that suffer from CPU-related desensitisation. In consulting for paging carriers around the world, Infostream has measured pagers that desensitise by as much as 10 dB due to their backlight alone! Most reputable pager manufacturers, however, have this problem well and truly under control. Buy the cheapest of the cheap and you take the risk. But those are not likely to be encrypted pagers.
The original Motorola Adviser pagers (and the subsequently derived designs, many still in production) famously display an envelope symbol on the screen while receiving a message, and it's a fair bet that this wasn't merely some handy feature to inform the user that a message is about to arrive, but a way of masking receiver desensitisation while the display is driven.
Of course any hypothetical desensitisation that would occur would only do so after the message is received (assuming that decryption only occurs after the message is received in full) and so the desensitisation effect, if there were any, would only affect subsequently received messages during the short time that decryption takes place. That’s a potential problem, but a slight one and certainly not an inevitable consequence of the decryption process itself.
Of the two basic approaches discussed to encrypting a pager signal over the air, block and stream ciphers, stream ciphers are the clear winner in the paging context. Stream ciphers can be generated from any of the international standard block ciphers, including 128 bit AES even with limited computing power available to the pager. Bit errors are not amplified and messages do not need to be padded to fill entire blocks, so neither increased bandwidth usage nor reduced pager sensitivity will result.
When it comes to generating a nonce to seed the cipher stream, a time based system is the clear winner as long as it is possible to implement the encryption in the paging encoder/terminal rather than sending the message encrypted from the source. Of course, you can encrypt the message from source to terminal as a separate process if that security is required. Mostly however, the problem relates to snooping the message over the air, because today, that is laughably easy to accomplish.
If the CMA is considering an encryption standard, the biggest question to ask is whether the encryption takes place at the encoder (opening up some useful possibilities) or at the original message sender (which presents some challenges). The encryption from sender to terminal is a fundamentally different communication and computing problem than the encryption from terminal to pager. Solving both encryption problems with a single technique is far from ideal. Given that many existing paging terminals are known to struggle with the arbitrary character streams that an encryption system would likely generate, encryption from the source suffers from some practical implementation problems already.
Even pagers that rely on traditional decoder chips (rather than implementing the decoding in a general purpose CPU) can still use the time-derived nonce approach for POCSAG using the techniques outlined in this paper, so retrofitting encryption to older pager hardware ought not be a problem – assuming that CPU and memory constraints allow for the additional encryption software.
There are many other issues to be resolved in defining an encryption standard for paging. Security of key management is certainly an issue in the paging terminal based encryption system but again, this depends on the type of system envisioned. Large dedicated public safety networks would benefit from a centralised key management system integrated into the paging switch. For public carriers, their customers might reasonably object.
Infostream supports the overdue initiative for an open industry-wide paging encryption standard. We believe that the industry should not support a standard that appreciably increases bandwidth utilisation or desensitises the pager by making certain parts of the transmitted message more susceptible to bit errors. Neither of these is a necessary outcome of implementing encryption, if done correctly.
Nor would we recommend that any paging user (on either side of the value chain) rely on a proprietary system whose end-to-end method of operation is not available for public scrutiny and peer review. Merely claiming adherence to a standard, no matter how inherently secure, is not sufficient, and secrecy of implementation should be regarded with suspicion, not as an aid to security.
Infostream is a manufacturer of paging infrastructure and specialised pagers. Infostream has produced speciality pagers utilising encryption for financial market, medical and public safety applications for more than fifteen years.
[FLEX is a registered trademark of Motorola, Inc.]