MD5 (Message-Digest algorithm 5)

In cryptography, MD5 (Message-Digest algorithm 5) is a widely used cryptographic hash function with a 128-bit hash value. As an Internet standard (RFC 1321), MD5 has been employed in a wide variety of security applications, and is also commonly used to check the integrity of files. An MD5 hash is typically expressed as a 32-character hexadecimal number.
MD5 was designed by Ronald Rivest in 1991 to replace an earlier hash function, MD4. In 1996, a flaw was found with the design of MD5; while it was not a clearly fatal weakness, cryptographers began to recommend using other algorithms, such as SHA-1 (which has meanwhile been found vulnerable itself). In 2004, more serious flaws were discovered making further use of the algorithm for security purposes questionable.
Vulnerability
Because MD5 makes only one pass over the data, if two prefixes with the same hash can be constructed, a common suffix can be appended to both without destroying the collision, which makes the colliding files more plausible as meaningful data.
Because the current collision-finding techniques allow the preceding hash state to be specified arbitrarily, a collision can be found for any desired prefix; that is, for any given string of characters X, two colliding files can be determined which both begin with X.
All that is required to generate two colliding files is a template file, with a 128-byte block of data aligned on a 64-byte boundary, that can be changed freely by the collision-finding algorithm.
Recently, a number of projects have created MD5 "rainbow tables" which are easily accessible online, and can be used to reverse many MD5 hashes into strings that collide with the original input, usually for the purposes of password cracking. However, if passwords are combined with a salt before the MD5 digest is generated, rainbow tables become much less useful.
Applications
MD5 digests have been widely used in the software world to provide some assurance that a transferred file has arrived intact. For example, file servers often provide a pre-computed MD5 checksum for the files, so that a user can compare the checksum of the downloaded file to it. Unix-based operating systems include MD5 sum utilities in their distribution packages, whereas Windows users use third-party applications.
However, now that it is easy to generate MD5 collisions, it is possible for the person who created the file to create a second file with the same checksum, so this technique cannot protect against some forms of malicious tampering. Also, in some cases the checksum cannot be trusted (for example, if it was obtained over the same channel as the downloaded file), in which case MD5 can only provide error-checking functionality: it will recognize a corrupt or incomplete download, which becomes more likely when downloading larger files.
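As an illustration of this kind of integrity check, the following Python sketch computes the MD5 of a downloaded file and compares it with the published checksum; the file name and expected value are placeholders.

import hashlib

def md5_of_file(path: str) -> str:
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):   # read in 64 KiB chunks
            h.update(block)
    return h.hexdigest()

expected = "0123456789abcdef0123456789abcdef"   # checksum published alongside the file
if md5_of_file("download.iso") != expected:
    print("Checksum mismatch: the download is corrupt or incomplete.")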
MD5 is widely used to store passwords. To mitigate the vulnerabilities mentioned above, one can add a salt to the passwords before hashing them. Some implementations may apply the hashing function more than once; see key strengthening.
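A minimal sketch of the salting and key-strengthening idea follows (MD5 appears here only because it is the subject of this section; a real system would use a dedicated password-hashing function, and the salt length and iteration count below are illustrative choices).

import hashlib
import os

def hash_password(password: str, iterations: int = 1000):
    salt = os.urandom(16)                      # random per-user salt
    digest = salt + password.encode()
    for _ in range(iterations):                # apply the hash repeatedly (key strengthening)
        digest = hashlib.md5(digest).digest()
    return salt.hex(), digest.hex()            # store both the salt and the final digest

salt, stored = hash_password("correct horse battery staple")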
Algorithm
MD5 processes a variable-length message into a fixed-length output of 128 bits. The input message is broken up into 512-bit blocks; the message is padded so that its length is a multiple of 512. The padding works as follows: first a single 1 bit is appended to the end of the message. This is followed by as many zeros as are required to bring the length of the message up to 64 bits fewer than a multiple of 512. The remaining 64 bits are filled with a 64-bit integer representing the length of the original message, in bits.
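For example, the 43-byte (344-bit) message used in the examples later in this section is padded with a single 1 bit and 103 zero bits to reach 448 bits, and the 64-bit length value 344 is then appended, producing exactly one 512-bit block.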
The main MD5 algorithm operates on a 128-bit state, divided into four 32-bit words, denoted A, B, C and D. These are initialized to certain fixed constants. The main algorithm then operates on each 512-bit message block in turn, each block modifying the state. The processing of a message block consists of four similar stages, termed rounds; each round is composed of 16 similar operations based on a non-linear function F, modular addition, and left rotation. There are four possible functions F; a different one is used in each round:

F(X, Y, Z) = (X ∧ Y) ∨ (¬X ∧ Z)
G(X, Y, Z) = (X ∧ Z) ∨ (Y ∧ ¬Z)
H(X, Y, Z) = X ⊕ Y ⊕ Z
I(X, Y, Z) = Y ⊕ (X ∨ ¬Z)

where ⊕, ∧, ∨ and ¬ denote the XOR, AND, OR and NOT operations respectively.

Pseudocode
Pseudocode for the MD5 algorithm follows.
//Note: All variables are unsigned 32 bits and wrap modulo 2^32 when calculating
var int[64] r, k

//r specifies the per-round shift amounts
r[ 0..15] := {7, 12, 17, 22, 7, 12, 17, 22, 7, 12, 17, 22, 7, 12, 17, 22}
r[16..31] := {5, 9, 14, 20, 5, 9, 14, 20, 5, 9, 14, 20, 5, 9, 14, 20}
r[32..47] := {4, 11, 16, 23, 4, 11, 16, 23, 4, 11, 16, 23, 4, 11, 16, 23}
r[48..63] := {6, 10, 15, 21, 6, 10, 15, 21, 6, 10, 15, 21, 6, 10, 15, 21}

//Use binary integer part of the sines of integers as constants:
for i from 0 to 63
    k[i] := floor(abs(sin(i + 1)) × (2 pow 32))

//Initialize variables:
var int h0 := 0x67452301
var int h1 := 0xEFCDAB89
var int h2 := 0x98BADCFE
var int h3 := 0x10325476

//Pre-processing:
append "1" bit to message
append "0" bits until message length in bits ≡ 448 (mod 512)
append bit (bit, not byte) length of unpadded message as 64-bit little-endian integer to message

//Process the message in successive 512-bit chunks:
for each 512-bit chunk of message
    break chunk into sixteen 32-bit little-endian words w[i], 0 ≤ i ≤ 15

    //Initialize hash value for this chunk:
    var int a := h0
    var int b := h1
    var int c := h2
    var int d := h3

    //Main loop:
    for i from 0 to 63
        if 0 ≤ i ≤ 15 then
            f := (b and c) or ((not b) and d)
            g := i
        else if 16 ≤ i ≤ 31 then
            f := (d and b) or ((not d) and c)
            g := (5×i + 1) mod 16
        else if 32 ≤ i ≤ 47 then
            f := b xor c xor d
            g := (3×i + 5) mod 16
        else if 48 ≤ i ≤ 63 then
            f := c xor (b or (not d))
            g := (7×i) mod 16

        temp := d
        d := c
        c := b
        b := b + leftrotate((a + f + k[i] + w[g]), r[i])
        a := temp

    //Add this chunk's hash to result so far:
    h0 := h0 + a
    h1 := h1 + b
    h2 := h2 + c
    h3 := h3 + d

var int digest := h0 append h1 append h2 append h3 //(expressed as little-endian)
//leftrotate function definition
leftrotate(x, c)
    return (x << c) or (x >> (32 - c));

Note: Instead of the formulation from the original RFC 1321 shown, the following may be used for improved efficiency (useful if assembly language is being used - otherwise, the compiler will generally optimize the above code):
(0 ≤ i ≤ 15): f := d xor (b and (c xor d))
(16 ≤ i ≤ 31): f := c xor (d and (b xor c))
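The pseudocode above can be transcribed almost line for line into a runnable form. The following is a minimal Python sketch for study; in practice one would use a vetted implementation such as the md5 function in Python's standard hashlib module.

import math
import struct

def md5(message: bytes) -> str:
    # Per-round shift amounts and sine-derived constants, as in the pseudocode
    r = ([7, 12, 17, 22] * 4 + [5, 9, 14, 20] * 4 +
         [4, 11, 16, 23] * 4 + [6, 10, 15, 21] * 4)
    k = [int(abs(math.sin(i + 1)) * 2**32) & 0xFFFFFFFF for i in range(64)]
    h0, h1, h2, h3 = 0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476

    def rotl(x, c):
        return ((x << c) | (x >> (32 - c))) & 0xFFFFFFFF

    # Pre-processing: append the 1 bit, the zero padding, and the 64-bit bit length
    length = (8 * len(message)) & 0xFFFFFFFFFFFFFFFF
    message += b"\x80"
    message += b"\x00" * ((56 - len(message) % 64) % 64)
    message += struct.pack("<Q", length)

    # Process the message in successive 512-bit (64-byte) chunks
    for ofs in range(0, len(message), 64):
        w = struct.unpack("<16I", message[ofs:ofs + 64])
        a, b, c, d = h0, h1, h2, h3
        for i in range(64):
            if i < 16:
                f, g = (b & c) | (~b & d), i
            elif i < 32:
                f, g = (d & b) | (~d & c), (5 * i + 1) % 16
            elif i < 48:
                f, g = b ^ c ^ d, (3 * i + 5) % 16
            else:
                f, g = c ^ (b | ~d), (7 * i) % 16
            rotated = rotl((a + (f & 0xFFFFFFFF) + k[i] + w[g]) & 0xFFFFFFFF, r[i])
            a, d, c, b = d, c, b, (b + rotated) & 0xFFFFFFFF
        h0, h1 = (h0 + a) & 0xFFFFFFFF, (h1 + b) & 0xFFFFFFFF
        h2, h3 = (h2 + c) & 0xFFFFFFFF, (h3 + d) & 0xFFFFFFFF

    return struct.pack("<4I", h0, h1, h2, h3).hex()   # digest expressed little-endian

assert md5(b"") == "d41d8cd98f00b204e9800998ecf8427e"
assert md5(b"The quick brown fox jumps over the lazy dog") == "9e107d9d372bb6826bd81d3542a419d6"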
MD5 hashes
The 128-bit (16-byte) MD5 hashes (also termed message digests) are typically represented as a sequence of 32 hexadecimal digits. The following demonstrates a 43-byte ASCII input and the corresponding MD5 hash:
MD5("The quick brown fox jumps over the lazy dog")
= 9e107d9d372bb6826bd81d3542a419d6
Even a small change in the message will (with overwhelming probability) result in a completely different hash, due to the avalanche effect. For example, changing d to e:
MD5("The quick brown fox jumps over the lazy eog")
= ffd93f16876049265fbaef4da268dd0e
The hash of the zero-length string is:
MD5("")
= d41d8cd98f00b204e9800998ecf8427e
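These digests can be reproduced with any conforming MD5 implementation, for example via Python's standard hashlib module:

import hashlib

print(hashlib.md5(b"The quick brown fox jumps over the lazy dog").hexdigest())
# 9e107d9d372bb6826bd81d3542a419d6
print(hashlib.md5(b"").hexdigest())
# d41d8cd98f00b204e9800998ecf8427e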

Pretty Good Privacy (PGP)

How PGP encryption works
PGP encryption uses public-key cryptography and includes a system which binds the public keys to a user name.
Encryption/decryption
PGP message encryption normally uses both asymmetric key encryption and symmetric key encryption algorithms.
Commonly, when encrypting a message, the sender uses the public key half of the recipient's key pair to encrypt a symmetric cipher session key. That session key is used, in turn, to encrypt the plaintext of the message. There are several other operational modes (eg, symmetric key operation only), but these are less commonly used.
The recipient of a PGP-encrypted message decrypts the session key using his private key (the session key was encrypted by the sender using his public key). Next, he decrypts the ciphertext of the message using the session key.
Use of two ciphers in this way was chosen, despite higher complication, in part because of the very considerable difference in operating speed between asymmetric key and symmetric key ciphers (the difference is often a factor of 1000 or more). This approach also makes it easily possible to send the same encrypted message to two or more recipients.
The entire encryption and decryption operations are completely automated in current PGP desktop versions. Many PGP users' public keys are available to all from the many PGP key servers around the world, most of which coordinate their records so as to act as mirror sites for each other.
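The hybrid pattern described above can be illustrated outside of PGP itself. The following Python sketch uses the third-party cryptography package rather than OpenPGP (so it does not produce PGP-compatible messages): a fresh AES session key encrypts the plaintext, and the recipient's RSA public key wraps that session key.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's key pair (in PGP these would come from the user's key ring or a key server)
recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: encrypt the plaintext with a fresh symmetric session key, then wrap that key
session_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"the plaintext message", None)
wrapped_key = recipient_public.encrypt(session_key, oaep)

# Recipient: unwrap the session key with the private key, then decrypt the message
recovered_key = recipient_private.decrypt(wrapped_key, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
assert plaintext == b"the plaintext message"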
Digital signatures
A similar strategy is used to detect whether a message has been altered since it was completed (the message integrity property), and whether it was actually sent by the person/entity claimed to be the sender (a digital signature). In PGP, it is used by default in conjunction with encryption, but can be applied to plaintext as well. The sender uses PGP to create a digital signature for the message with either the RSA or DSA signature algorithms. To do so, PGP computes a hash (also called a message digest) from the plaintext, and then creates the digital signature from that hash using the sender's private key.
The message recipient uses the sender's public key and the digital signature to recover the original message digest. He compares this message digest with the message digest he computed himself from the (recovered) plaintext. If the signature matches the received plaintext's message digest, it must be presumed (to a very high degree of confidence) that the message received has not been tampered with, either deliberately or accidentally. As well, since it was properly signed, it is very likely (to a very high degree of confidence) that the claimed sender actually did send it.
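A sketch of the hash-then-sign principle follows, again using the third-party cryptography package rather than OpenPGP's own signature format; RSA-PSS appears here purely as an illustrative signature scheme.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

sender_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sender_public = sender_private.public_key()
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

message = b"message body"
signature = sender_private.sign(message, pss, hashes.SHA256())   # hash the message, sign the hash

try:
    sender_public.verify(signature, message, pss, hashes.SHA256())
    print("signature valid: message intact and signed by the key holder")
except InvalidSignature:
    print("signature check failed")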
Web of trust
Both when encrypting messages and when verifying signatures, it is critical that the public key one uses to send messages to someone or some entity actually does 'belong' to the intended recipient. Simply downloading a public key from somewhere is not overwhelming assurance of that association; deliberate (or accidental) spoofing is possible. PGP has, from its first versions, always included provisions for distributing a user's public keys in an 'identity certificate' which is so constructed cryptographically that any tampering (or accidental garble) is readily detectable. But merely making a certificate effectively impossible to modify undetectably is also insufficient. It can prevent corruption only after the certificate has been created, not before. Users must also ensure by some means that the public key in a certificate actually does belong to the person/entity claiming it. From its first release, PGP products have included an internal certificate 'vetting scheme' to assist with this; it has been called a web of trust. A given public key (or more specifically, information binding a user name to a key) may be digitally signed by a third party user to attest to the association between someone (actually a user name) and the key. There are several levels of confidence which can be included in such signatures. Although many programs read and write this information, few (if any) include this level of certification when calculating whether to trust a key.
The web of trust protocol was first described by Zimmermann in the manual for PGP version 2.0:
As time goes on, you will accumulate keys from other people that you may want to designate as trusted introducers. Everyone else will each choose their own trusted introducers. And everyone will gradually accumulate and distribute with their key a collection of certifying signatures from other people, with the expectation that anyone receiving it will trust at least one or two of the signatures. This will cause the emergence of a decentralized fault-tolerant web of confidence for all public keys.
The web of trust mechanism has advantages over a centrally managed PKI scheme, but has not been universally used. Users have to be willing to accept certificates and check their validity manually, or simply accept them. The underlying problem has found no satisfactory solution.
Certificates
In the (more recent) OpenPGP specification, trust signatures can be used to support creation of certificate authorities. A trust signature indicates both that the key belongs to its claimed owner and that the owner of the key is trustworthy to sign other keys at one level below their own. A level 0 signature is comparable to a web of trust signature, since only the validity of the key is certified. A level 1 signature is similar to the trust one has in a certificate authority because a key signed to level 1 is able to issue an unlimited number of level 0 signatures. A level 2 signature is highly analogous to the trust assumption users must rely on whenever they use the default certificate authority list (like those included in web browsers); it allows the owner of the key to make other keys certificate authorities.
PGP versions have always included a way to cancel ('revoke') identity certificates. A lost or compromised private key will require this if communication security is to be retained by that user. This is, more or less, equivalent to the certificate revocation lists of centralized PKI schemes. Recent PGP versions have also supported certificate expiration dates.
The problem of correctly identifying a public key as belonging to a particular user is not unique to PGP. All public key / private key cryptosystems have the same problem, if in slightly different guise, and no fully satisfactory solution is known. PGP's original scheme, at least, leaves the decision whether or not to use its endorsement/vetting system to the user, while most other PKI schemes do not, requiring instead that every certificate attested to by a central certificate authority be accepted as correct.
Security quality
To the best of publicly available information, there is no known method which will allow a person or group to break PGP encryption by cryptographic, or computational means. Early versions of PGP have been found to have theoretical vulnerabilities and so current versions are recommended. Indeed, in 1996, cryptographer Bruce Schneier characterized an early version as being "the closest you're likely to get to military-grade encryption."[1] In contrast to security systems/protocols like SSL which only protect data in transit over a network, PGP encryption can also be used to protect data in long-term data storage such as disk files.
The cryptographic security of PGP encryption depends on the assumption that the algorithms used are unbreakable by direct cryptanalysis with current equipment and techniques. For instance, in the original version, the RSA algorithm was used to encrypt session keys; RSA's security depends upon the one-way function nature of mathematical integer factoring. New, currently unknown, integer factorization techniques might therefore make breaking RSA easier, or perhaps even trivially easy. However, it is generally presumed by informed observers that this is an intractable problem, and likely to remain so. Likewise, the secret key algorithm used in PGP version 2 was IDEA, which might, at some future time, be found to have a previously unsuspected cryptanalytic flaw. Specific instances of current PGP or IDEA insecurities, if they exist, are not publicly known. As current versions of PGP have added additional encryption algorithms, the degree of their cryptographic vulnerability varies with the algorithm used. In practice, none of the algorithms in current use is publicly known to have cryptanalytic weaknesses.

CRYPTOGRAPHY

Cryptography is, traditionally, the study of ways to convert information from its normal, comprehensible form into an obscured guise, unreadable without special knowledge — the practice of encryption. In the past, cryptography helped ensure secrecy in important communications, such as those of spies, military leaders, and diplomats. In recent decades, the field of cryptography has expanded its remit. Examples include schemes like digital signatures and digital cash, digital rights management for intellectual property protection, and securing electronic commerce. Cryptography is now often built into the infrastructure for computing and telecommunications; users may not even be aware of its presence.
In cryptology, RSA is an algorithm for public-key cryptography. It was the first algorithm known to be suitable for signing as well as encryption, and one of the first great advances in public key cryptography. RSA is widely used in electronic commerce protocols, and is believed to be secure given sufficiently long keys and the use of up-to-date implementations.


Padding schemes
When used in practice, RSA is generally combined with some padding scheme. The goal of the padding scheme is to prevent a number of attacks that potentially work against RSA without padding:
• When encrypting with low encryption exponents (e.g., e = 3) and small values of m (i.e., m < n^(1/e)), the result of m^e is strictly less than the modulus n. In this case, the ciphertext can be decrypted easily by taking the eth root of the ciphertext over the integers.
• Because RSA encryption is a deterministic encryption algorithm (i.e., it has no random component), an attacker can successfully launch a chosen-plaintext attack against the cryptosystem by encrypting likely plaintexts under the public key and testing whether they are equal to the ciphertext. A cryptosystem is called semantically secure if an attacker cannot distinguish two encryptions from each other even if the attacker knows (or has chosen) the corresponding plaintexts. As described above, RSA without padding is not semantically secure.
• RSA has the property that the product of two ciphertexts is equal to the encryption of the product of the respective plaintexts; that is, m1^e × m2^e ≡ (m1 × m2)^e (mod n). Because of this multiplicative property, a chosen-ciphertext attack is possible. For example, an attacker who wants to know the decryption of a ciphertext c = m^e mod n may ask the holder of the private key to decrypt the unsuspicious-looking ciphertext c × r^e mod n for some value r chosen by the attacker. Because of the multiplicative property, this is the encryption of m × r mod n. Hence, if the attack succeeds, the attacker learns m × r mod n, from which the message m can be derived by multiplying by the modular inverse of r modulo n (a small numeric illustration follows this list).
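The following Python sketch makes the multiplicative property and the blinding attack above concrete using the small textbook key n = 3233, e = 17, d = 2753 (p = 61, q = 53); real keys are of course far larger, and all numbers here are purely illustrative.

# Small textbook RSA key, for illustration only
n, e, d = 3233, 17, 2753

m = 65                                   # "secret" plaintext
c = pow(m, e, n)                         # ciphertext known to the attacker

# Multiplicative property: E(m1) × E(m2) ≡ E(m1 × m2) (mod n)
m1, m2 = 7, 11
assert (pow(m1, e, n) * pow(m2, e, n)) % n == pow((m1 * m2) % n, e, n)

# Blinding: ask the key holder to decrypt the innocuous-looking c × r^e mod n
r = 19
blinded = (c * pow(r, e, n)) % n
answer = pow(blinded, d, n)              # the key holder unknowingly returns m × r mod n
recovered = (answer * pow(r, -1, n)) % n # divide out r (pow(r, -1, n) needs Python 3.8+)
assert recovered == m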
To avoid these problems, practical RSA implementations typically embed some form of structured, randomized padding into the value m before encrypting it. This padding ensures that m does not fall into the range of insecure plaintexts, and that a given message, once padded, will encrypt to one of a large number of different possible ciphertexts.
Standards such as PKCS have been carefully designed to securely pad messages prior to RSA encryption. Because these schemes pad the plaintext m with some number of additional bits, the size of the un-padded message M must be somewhat smaller. RSA padding schemes must be carefully designed so as to prevent sophisticated attacks which may be facilitated by a predictable message structure. Early versions of the PKCS standard (i.e. PKCS #1 up to version 1.5) used a construction that turned RSA into a semantically secure encryption scheme. This version was later found vulnerable to a practical adaptive chosen ciphertext attack. Later versions of the standard include Optimal Asymmetric Encryption Padding (OAEP), which prevents these attacks. The PKCS standard also incorporates processing schemes designed to provide additional security for RSA signatures, e.g., the Probabilistic Signature Scheme for RSA (RSA-PSS).
Signing messages
Suppose Alice uses Bob's public key to send him an encrypted message. In the message, she can claim to be Alice but Bob has no way of verifying that the message was actually from Alice since anyone can use Bob's public key to send him encrypted messages. So, in order to verify the origin of a message, RSA can also be used to sign a message.
Suppose Alice wishes to send a signed message to Bob. She produces a hash value of the message, raises it to the power of d mod n (as she does when decrypting a message), and attaches it as a "signature" to the message. When Bob receives the signed message, he raises the signature to the power of e mod n (as he does when encrypting a message), and compares the resulting hash value with the message's actual hash value. If the two agree, he knows that the author of the message was in possession of Alice's secret key, and that the message has not been tampered with since.
Note that secure padding schemes such as RSA-PSS are as essential for the security of message signing as they are for message encryption, and that the same key should never be used for both encryption and signing purposes.
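The exponent arithmetic behind signing can be illustrated with the same small textbook key used earlier (again purely for illustration; a real implementation would use a full-size key and a padding scheme such as RSA-PSS rather than reducing the hash mod n).

import hashlib

n, e, d = 3233, 17, 2753                       # toy key: p = 61, q = 53
message = b"attack at dawn"
h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n   # toy hash reduction mod n

signature = pow(h, d, n)                       # Alice raises the hash to the power d mod n
assert pow(signature, e, n) == h               # Bob raises the signature to the power e mod n and compares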

Digital Signature Algorithm

The Digital Signature Algorithm (DSA) is a United States Federal Government standard or FIPS for digital signatures. It was proposed by the National Institute of Standards and Technology (NIST) in August 1991 for use in their Digital Signature Standard (DSS), specified in FIPS 186 [1], adopted in 1993. A minor revision was issued in 1996 as FIPS 186-1 [2], and the standard was expanded further in 2000 as FIPS 186-2 [3].
DSA is covered by U.S. Patent 5,231,668, filed July 26, 1991, and attributed to David W. Kravitz, a former NSA employee. This patent was given to "The United States of America as represented by the Secretary of Commerce, Washington, D.C." and the NIST has made this patent available worldwide royalty-free. [4] Dr. Claus P. Schnorr claims that his U.S. Patent 4,995,082 covers DSA; this claim is disputed.
Key generation
Key generation has two phases. The first phase is a choice of algorithm parameters which may be shared between different users of the system:
Choose a cryptographic hash function H. In the original DSS, H was always SHA-1, but stronger hash functions from the SHA family are also in use. Sometimes the output of a newer hash function is truncated to the size of an older one for compatibility with existing key pairs.
Decide on a key length L. This is the primary measure of the cryptographic strength of the key. The original DSS constrained L to be a multiple of 64 between 512 and 1024 (inclusive). Later, FIPS-186-2, change notice 1 specifies that L should always be 1024. Later yet, NIST 800-57 recommends lengths of 2048 (or 3072) for keys with security lifetimes extending beyond 2010 (or 2030).
Choose a prime q with the same number of bits as the output of H.
Choose an L-bit prime p such that p–1 is a multiple of q.
Choose g, a number whose multiplicative order modulo p is q. This may be done by setting g = h^((p–1)/q) mod p for some arbitrary h (1 < h < p–1), and trying again if the result comes out as 1. Most choices of h will lead to a usable g; commonly h = 2 is used.
The algorithm parameters (p, q, g) may be shared between different users of the system. The second phase computes private and public keys for a single user:
Choose x by some random method, where 0 < x < q.
Calculate y = g^x mod p.
Public key is (p, q, g, y). Private key is x.
The forthcoming FIPS 186-3 (available as a draft [5]) uses SHA-224/256/384/512 as the hash function, q of size 224 and 256 bits, and L equal to 2048 and 3072, respectively.
There exist efficient algorithms for computing the modular exponentiations h^a mod p and g^x mod p.
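The parameter and key-generation steps can be traced with deliberately tiny numbers (q = 11, p = 23); these sizes are for illustration only, since FIPS 186 requires large primes and a SHA-family hash.

# Toy DSA domain parameters and key pair following the steps above
q, p, h = 11, 23, 2                 # p - 1 = 2*q, so q divides p - 1
g = pow(h, (p - 1) // q, p)         # g = h^((p-1)/q) mod p; here g = 4
assert g != 1 and pow(g, q, p) == 1 # g has multiplicative order q modulo p

x = 7                               # private key, 0 < x < q
y = pow(g, x, p)                    # public key component y = g^x mod p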
Signing
Generate a random per-message value k where 0 < k < q
Calculate r = (g^k mod p) mod q
Calculate s = (k^(-1) * (H(m) + x*r)) mod q
Recalculate the signature in the unlikely case that r=0 or s=0
The signature is (r,s)
The extended Euclidean algorithm can be used to compute the modular inverse k^(-1) mod q.
Verifying
Calculate w = s^(-1) mod q
Calculate u1 = (H(m)*w) mod q
Calculate u2 = (r*w) mod q
Calculate v = ((g^(u1) * y^(u2)) mod p) mod q
The signature is valid if v = r
DSA is similar to the ElGamal signature scheme.
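The signing and verification steps above can likewise be sketched in Python with the toy parameters from the key-generation example; the hash is reduced mod q only to keep the numbers small, whereas FIPS 186 takes the leftmost bits of a SHA-family digest.

import hashlib
import secrets

# Toy domain parameters and keys (see the key-generation sketch above)
p, q, g = 23, 11, 4
x = 7                                  # private key
y = pow(g, x, p)                       # public key

def H(m: bytes) -> int:
    return int.from_bytes(hashlib.sha256(m).digest(), "big")

def sign(m: bytes):
    while True:
        k = secrets.randbelow(q - 1) + 1           # fresh per-message secret, 0 < k < q
        r = pow(g, k, p) % q
        s = (pow(k, -1, q) * (H(m) + x * r)) % q   # pow(k, -1, q) is the modular inverse of k
        if r != 0 and s != 0:                      # recompute in the unlikely zero cases
            return r, s

def verify(m: bytes, r: int, s: int) -> bool:
    if not (0 < r < q and 0 < s < q):
        return False
    w = pow(s, -1, q)
    u1 = (H(m) * w) % q
    u2 = (r * w) % q
    v = (pow(g, u1, p) * pow(y, u2, p)) % p % q
    return v == r

r, s = sign(b"example message")
assert verify(b"example message", r, s)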
Correctness of the algorithm
The signature scheme is correct in the sense that the verifier will always accept genuine signatures. This can be shown as follows:
First, if g = h^((p–1)/q) mod p, it follows that g^q ≡ h^(p–1) ≡ 1 (mod p) by Fermat's little theorem. Since g > 1 and q is prime, g must have order q.
The signer computes
    s = k^(-1) * (H(m) + x*r) mod q
Thus
    k ≡ H(m)*s^(-1) + x*r*s^(-1) ≡ H(m)*w + x*r*w (mod q)
Since g has order q modulo p, we have
    g^k ≡ g^(H(m)*w) * g^(x*r*w) ≡ g^(H(m)*w) * y^(r*w) ≡ g^(u1) * y^(u2) (mod p)
Finally, the correctness of DSA follows from
    r = (g^k mod p) mod q = ((g^(u1) * y^(u2)) mod p) mod q = v

HUSHMAIL

Hushmail is a web-based email service founded by Cliff Baltzley after leaving Ultimate Privacy. Hushmail offers PGP-encrypted e-mail, file storage, vanity domain service, and instant messaging (Hush Messenger). It was founded in May 1999 by Hush Communications (based in Vancouver, British Columbia, Canada, with offices in Dublin, Ireland; Delaware, United States; and Anguilla). The Hushmail.com servers are hosted in Vancouver. Hushmail uses OpenPGP standards and the source is available for download.
If public encryption keys are available to both recipient and sender (either both are Hushmail users or have uploaded PGP keys to the Hush keyserver), Hushmail can convey authenticated, encrypted messages in both directions. For recipients for whom no public key is available, Hushmail will allow a message to be encrypted by a password (with a password hint) and stored for pickup by the recipient, or the message can be sent in cleartext.
Hushmail has many added security features, such as hidden IP addresses in e-mail headers. Due to the small size of the free e-mail inbox (2 MB) and the lack of IMAP or POP3 on free accounts, some users may prefer other e-mail solutions. Paid accounts have several hundred MB of storage as well as IMAP and POP3 access. PC Magazine has recommended it as its top anonymous e-mail service for privacy advocates.
The Hushmail suite also includes a secure IM tool called Hush Messenger as well as web based key management tools.
Users must trust, to a certain extent, that Hush's equipment and software are in honest hands, and always have been. Nevertheless, the design of the software, which is largely open for inspection, removes some of this need for trust. For example, barring unknown security holes, the Hush user's private decryption keys are not normally available to the operators of Hush's equipment.

SYMMETRIC KEY CRYPTOGRAPHY

In symmetric key cryptography, both parties must possess a secret key which they must exchange prior to using any encryption. Distribution of secret keys has been problematic until recently, because it involved face-to-face meeting, use of a trusted courier, or sending the key through an existing encryption channel. The first two are often impractical and always unsafe, while the third depends on the security of a previous key exchange.
In public key cryptography, the distribution of public keys is done through public key servers. When a person creates a key pair, he keeps one key private and uploads the other, the public key, to a server where it can be accessed by anyone to send the user a private, encrypted message. Disclosure of these public keys is not only not a problem, but is actively encouraged. The private keys are never transmitted, and can therefore be physically secured.
Secure Sockets Layer (SSL) uses Diffie-Hellman key exchange if the client does not have a public-private key pair and a published certificate in the Public Key Infrastructure, and public key cryptography if the client does have both the key pair and the certificate.
In secret sharing, a secret (a password, key, trade secret, ...) is used as a seed to generate a number of distinct pieces, and the pieces are distributed so that some subset of the recipients can jointly authenticate themselves and use the secret information without learning what it is. Secret sharing is also called secret splitting, key splitting, and split knowledge.
We want to share a secret among N people so that any M of them (an "M of N" scheme, with M ≤ N) can regenerate the original information, but no smaller group, of up to M − 1 people, can do so. There are mathematical problems of this type, such as the number of points needed to identify a polynomial of a certain degree (used in Shamir's scheme), or the number of intersecting hyperplanes needed to specify a point (used in Blakley's scheme). We can hand out data specifying any number of points on the curve, or hyperplanes through the point, without altering the number needed to solve the problem and, in our application, access the protected resource.
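A minimal Python sketch of Shamir's polynomial idea follows, working over a prime field; the prime, the secret, and the 3-of-5 split are illustrative choices, and a real implementation would handle randomness and encoding far more carefully.

import secrets

PRIME = 2**127 - 1                         # a Mersenne prime, large enough for the demo

def make_shares(secret: int, m: int, n: int):
    # Random polynomial of degree m-1 whose constant term is the secret
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(m - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):         # Horner evaluation mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, m=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice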
Key distribution is an important issue in wireless sensor network (WSN) design. There are many key distribution schemes in the literature that are designed to maintain easy and at the same time secure communication among sensor nodes. The most accepted method of key distribution in WSNs is key predistribution, where secret keys are placed in sensor nodes before deployment. When the nodes are deployed over the target area, the secret keys are used to create the network. For more information, see key distribution in wireless sensor networks.

Simple Authentication and Security Layer (SASL)

Simple Authentication and Security Layer (SASL) is a framework for authentication and data security in Internet protocols. It decouples authentication mechanisms from application protocols, in theory allowing any authentication mechanism supported by SASL to be used in any application protocol that uses SASL. Authentication mechanisms can also support proxy authorization, a facility allowing one user to assume the identity of another. Authentication mechanisms can also provide a data security layer offering data integrity and data confidentiality services; DIGEST-MD5 is an example of a mechanism that can do so. Application protocols that support SASL typically also support Transport Layer Security (TLS) to complement the services offered by SASL.
SASL was originally specified in RFC 2222, authored by John Myers while at Carnegie Mellon University. That document was made obsolete by RFC 4422, edited by Alexey Melnikov and Kurt Zeilenga.
SASL Mechanisms
A SASL mechanism is modelled as a series of challenges and responses. Defined SASL mechanisms [1] include:
"EXTERNAL", where authentication is implicit in the context (e.g., for protocols already using IPsec or TLS)
"ANONYMOUS", for unauthenticated guest access
"PLAIN", a simple cleartext password mechanism. PLAIN obsoleted the LOGIN mechanism.
"OTP", a one-time password mechanism. OTP obsoleted the SKEY Mechanism.
"SKEY", an S/KEY mechanism.
"CRAM-MD5", a simple challenge-response scheme based on HMAC-MD5.
"DIGEST-MD5", HTTP Digest compatible challenge-response scheme based upon MD5. DIGEST-MD5 offers a data security layer.
"NTLM", an NT LAN Manager authentication mechanism.
"GSSAPI", for Kerberos V5 authentication via the GSSAPI. GSSAPI offers a data security layer.
A family of SASL mechanisms is planned to support arbitrary GSSAPI mechanisms.
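As referenced in the CRAM-MD5 entry above, the client's response in that mechanism is its username followed by the hex HMAC-MD5 of the server's challenge keyed with the shared password (per RFC 2195). The username, password, and challenge in this Python sketch are made-up values.

import base64
import hashlib
import hmac

def cram_md5_response(username: str, password: str, b64_challenge: str) -> str:
    challenge = base64.b64decode(b64_challenge)
    digest = hmac.new(password.encode(), challenge, hashlib.md5).hexdigest()
    return base64.b64encode(f"{username} {digest}".encode()).decode()

challenge = base64.b64encode(b"<1896.697170952@postoffice.example.net>").decode()
print(cram_md5_response("alice", "secret", challenge))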
SASL-aware Application Protocols
Application protocols define their representation of SASL exchanges with a profile. A protocol has a service name such as "ldap" in a registry shared with GSSAPI and Kerberos [2]. Protocols currently supporting SASL include BEEP, IMAP, LDAP, POP, SMTP, IMSP, ACAP, and XMPP.