MD5 (Message-Digest algorithm 5)

In cryptography, MD5 (Message-Digest algorithm 5) is a widely used cryptographic hash function with a 128-bit hash value. As an Internet standard (RFC 1321), MD5 has been employed in a wide variety of security applications, and is also commonly used to check the integrity of files. An MD5 hash is typically expressed as a 32-character hexadecimal number.
MD5 was designed by Ronald Rivest in 1991 to replace an earlier hash function, MD4. In 1996, a flaw was found in the design of MD5; while it was not a clearly fatal weakness, cryptographers began to recommend other algorithms, such as SHA-1 (which has meanwhile been found vulnerable itself). In 2004, more serious flaws were discovered, making further use of the algorithm for security purposes questionable.
Vulnerability
Because MD5 makes only one pass over the data, if two prefixes with the same hash can be constructed, a common suffix can be appended to both without destroying the collision, making it easier to pass the colliding files off as meaningful data.
Because the current collision-finding techniques allow the preceding hash state to be specified arbitrarily, a collision can be found for any desired prefix; that is, for any given string of characters X, two colliding files can be determined which both begin with X.
All that is required to generate two colliding files is a template file, with a 128-byte block of data aligned on a 64-byte boundary, that can be changed freely by the collision-finding algorithm.
Recently, a number of projects have created MD5 "rainbow tables" which are easily accessible online, and can be used to reverse many MD5 hashes into strings that collide with the original input, usually for the purposes of password cracking. However, if passwords are combined with a salt before the MD5 digest is generated, rainbow tables become much less useful.
Applications
MD5 digests have been widely used in the software world to provide some assurance that a transferred file has arrived intact. For example, file servers often provide a pre-computed MD5 checksum for each file, so that a user can compare the checksum of the downloaded file against it. Unix-based operating systems include MD5 sum utilities in their distribution packages, whereas Windows users typically rely on third-party applications.
However, now that it is easy to generate MD5 collisions, it is possible for the person who created the file to create a second file with the same checksum, so this technique cannot protect against some forms of malicious tampering. Also, in some cases the checksum cannot be trusted (for example, if it was obtained over the same channel as the downloaded file), in which case MD5 can only provide error-checking functionality: it will recognize a corrupt or incomplete download, which becomes more likely when downloading larger files.
MD5 is widely used to store passwords. To mitigate the vulnerabilities mentioned above, one can add a salt to the passwords before hashing them. Some implementations also apply the hash function more than once; see key strengthening.
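A minimal sketch of the salting idea follows (the function names are illustrative, and MD5 itself is no longer recommended for new password stores):

import hashlib, os

def hash_password(password: str) -> tuple[bytes, str]:
    salt = os.urandom(16)                                       # fresh random salt per password
    digest = hashlib.md5(salt + password.encode()).hexdigest()
    return salt, digest                                         # both are stored; the salt is not secret

def check_password(password: str, salt: bytes, stored: str) -> bool:
    return hashlib.md5(salt + password.encode()).hexdigest() == stored

salt, stored = hash_password("hunter2")
assert check_password("hunter2", salt, stored)

Because each password receives its own salt, a precomputed rainbow table of plain MD5 digests no longer matches the stored values.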
Algorithm
MD5 processes a variable-length message into a fixed-length output of 128 bits. The input message is broken up into 512-bit blocks; to make this possible, the message is first padded so that its length is divisible by 512. The padding works as follows: first a single 1 bit is appended to the end of the message. This is followed by as many zeros as are required to bring the length of the message up to 64 bits fewer than a multiple of 512. The remaining 64 bits are filled with a 64-bit integer representing the length, in bits, of the original message.
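This padding rule can be sketched for byte-aligned input as follows (the function name is illustrative):

def md5_pad(message: bytes) -> bytes:
    bit_length = (8 * len(message)) % 2**64   # original length in bits, modulo 2^64
    padded = message + b"\x80"                # a single 1 bit, then seven 0 bits
    while len(padded) % 64 != 56:             # zero bytes until length is 448 bits mod 512
        padded += b"\x00"
    return padded + bit_length.to_bytes(8, "little")  # 64-bit little-endian length field

assert len(md5_pad(b"abc")) % 64 == 0         # padded length is a multiple of 512 bits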
The main MD5 algorithm operates on a 128-bit state, divided into four 32-bit words, denoted A, B, C and D. These are initialized to certain fixed constants. The main algorithm then operates on each 512-bit message block in turn, each block modifying the state. The processing of a message block consists of four similar stages, termed rounds; each round is composed of 16 similar operations based on a non-linear function F, modular addition, and left rotation. There are four possible functions F; a different one is used in each round:

F(B, C, D) = (B ∧ C) ∨ (¬B ∧ D)
G(B, C, D) = (B ∧ D) ∨ (C ∧ ¬D)
H(B, C, D) = B ⊕ C ⊕ D
I(B, C, D) = C ⊕ (B ∨ ¬D)

where ⊕, ∧, ∨ and ¬ denote the XOR, AND, OR and NOT operations respectively.

Pseudocode
Pseudocode for the MD5 algorithm follows.
//Note: All variables are unsigned 32 bits and wrap modulo 2^32 when calculating
var int[64] r, k

//r specifies the per-round shift amounts
r[ 0..15] := {7, 12, 17, 22, 7, 12, 17, 22, 7, 12, 17, 22, 7, 12, 17, 22}
r[16..31] := {5, 9, 14, 20, 5, 9, 14, 20, 5, 9, 14, 20, 5, 9, 14, 20}
r[32..47] := {4, 11, 16, 23, 4, 11, 16, 23, 4, 11, 16, 23, 4, 11, 16, 23}
r[48..63] := {6, 10, 15, 21, 6, 10, 15, 21, 6, 10, 15, 21, 6, 10, 15, 21}

//Use binary integer part of the sines of integers as constants:
for i from 0 to 63
    k[i] := floor(abs(sin(i + 1)) × (2 pow 32))

//Initialize variables:
var int h0 := 0x67452301
var int h1 := 0xEFCDAB89
var int h2 := 0x98BADCFE
var int h3 := 0x10325476

//Pre-processing:
append "1" bit to message
append "0" bits until message length in bits ≡ 448 (mod 512)
append length in bits (bits, not bytes) of unpadded message as 64-bit little-endian integer to message

//Process the message in successive 512-bit chunks:
for each 512-bit chunk of message
    break chunk into sixteen 32-bit little-endian words w[i], 0 ≤ i ≤ 15

    //Initialize hash value for this chunk:
    var int a := h0
    var int b := h1
    var int c := h2
    var int d := h3

    //Main loop:
    for i from 0 to 63
        if 0 ≤ i ≤ 15 then
            f := (b and c) or ((not b) and d)
            g := i
        else if 16 ≤ i ≤ 31 then
            f := (d and b) or ((not d) and c)
            g := (5×i + 1) mod 16
        else if 32 ≤ i ≤ 47 then
            f := b xor c xor d
            g := (3×i + 5) mod 16
        else if 48 ≤ i ≤ 63 then
            f := c xor (b or (not d))
            g := (7×i) mod 16

        temp := d
        d := c
        c := b
        b := b + leftrotate((a + f + k[i] + w[g]), r[i])
        a := temp

    //Add this chunk's hash to result so far:
    h0 := h0 + a
    h1 := h1 + b
    h2 := h2 + c
    h3 := h3 + d

var int digest := h0 append h1 append h2 append h3 //(expressed as little-endian)
//leftrotate function definition
leftrotate (x, c)
    return (x << c) or (x >> (32-c));

Note: Instead of the formulation from the original RFC 1321 shown above, the following may be used for improved efficiency (useful if assembly language is being used; otherwise, the compiler will generally optimize the above code):
(0 ≤ i ≤ 15): f := d xor (b and (c xor d))
(16 ≤ i ≤ 31): f := c xor (d and (b xor c))
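The sine-derived constants k[i] defined above can be checked directly, for example in Python:

import math

k = [math.floor(abs(math.sin(i + 1)) * 2**32) for i in range(64)]
assert k[0] == 0xD76AA478 and k[63] == 0xEB86D391   # first and last constants in RFC 1321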
MD5 hashes
The 128-bit (16-byte) MD5 hashes (also termed message digests) are typically represented as a sequence of 32 hexadecimal digits. The following demonstrates a 43-byte ASCII input and the corresponding MD5 hash:
MD5("The quick brown fox jumps over the lazy dog")
= 9e107d9d372bb6826bd81d3542a419d6
Even a small change in the message will (with overwhelming probability) result in a completely different hash, due to the avalanche effect. For example, changing d to e:
MD5("The quick brown fox jumps over the lazy eog")
= ffd93f16876049265fbaef4da268dd0e
The hash of the zero-length string is:
MD5("")
= d41d8cd98f00b204e9800998ecf8427e
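These digests can be reproduced with any standard MD5 implementation, for example Python's hashlib module:

import hashlib

print(hashlib.md5(b"The quick brown fox jumps over the lazy dog").hexdigest())
# 9e107d9d372bb6826bd81d3542a419d6
print(hashlib.md5(b"").hexdigest())
# d41d8cd98f00b204e9800998ecf8427e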

Pretty Good Privacy (PGP)

How PGP encryption works
PGP encryption uses public-key cryptography and includes a system which binds the public keys to a user name.
Encryption/decryption
PGP message encryption normally uses both asymmetric key encryption and symmetric key encryption algorithms.
Commonly, when encrypting a message, the sender uses the public key half of the recipient's key pair to encrypt a symmetric cipher session key. That session key is used, in turn, to encrypt the plaintext of the message. There are several other operational modes (e.g., symmetric-key operation only), but these are less commonly used.
The recipient of a PGP-encrypted message decrypts the session key using his private key (the sender encrypted the session key with the recipient's public key). Next, he decrypts the ciphertext of the message using the session key.
Use of two ciphers in this way was chosen, despite the added complexity, in part because of the very considerable difference in operating speed between asymmetric-key and symmetric-key ciphers (often a factor of 1000 or more). This approach also makes it easy to send the same encrypted message to two or more recipients.
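The pattern can be sketched in a few lines of Python using the third-party cryptography package (this illustrates the hybrid idea only; it does not produce OpenPGP-formatted messages, and all names are illustrative):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.fernet import Fernet

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)

# Recipient's key pair; in PGP the public half would be fetched from a key server.
recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Sender: a fresh symmetric session key encrypts the message; RSA encrypts the session key.
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"the plaintext of the message")
wrapped_key = recipient_key.public_key().encrypt(session_key, oaep)

# Recipient: unwrap the session key with the private key, then decrypt the message.
recovered = Fernet(recipient_key.decrypt(wrapped_key, oaep)).decrypt(ciphertext)
assert recovered == b"the plaintext of the message"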
The entire encryption and decryption operations are completely automated in current PGP desktop versions. Many PGP users' public keys are available to all from the many PGP key servers around the world, most of which coordinate their records so as to act as mirror sites for each other.
Digital signatures
A similar strategy is used to detect whether a message has been altered since it was completed (the message integrity property), and whether it was actually sent by the person/entity claimed to be the sender (a digital signature). In PGP, it is used by default in conjunction with encryption, but can be applied to plaintext as well. The sender uses PGP to create a digital signature for the message with either the RSA or DSA signature algorithms. To do so, PGP computes a hash (also called a message digest) from the plaintext, and then creates the digital signature from that hash using the sender's private key.
The message recipient uses the sender's public key and the digital signature to recover the original message digest. He compares this message digest with the message digest he computed himself from the (recovered) plaintext. If the signature matches the received plaintext's message digest, it must be presumed (to a very high degree of confidence) that the message received has not been tampered with, either deliberately or accidentally. As well, since it was properly signed, it is very likely (to a very high degree of confidence) that the claimed sender actually did send it.
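A corresponding hash-then-sign sketch, again with the Python cryptography package (RSA with the PSS padding scheme; the key and message here are illustrative):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"an important message"

# Sender: the library hashes the message, then signs the digest with the private key.
signature = sender_key.sign(message, pss, hashes.SHA256())

# Recipient: verification with the sender's public key raises InvalidSignature on any
# mismatch between the signature and the re-hashed received message.
sender_key.public_key().verify(signature, message, pss, hashes.SHA256())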
Web of trust
Both when encrypting messages and when verifying signatures, it is critical that the public key one uses to send messages to someone or some entity actually does 'belong' to the intended recipient. Simply downloading a public key from somewhere is not overwhelming assurance of that association; deliberate (or accidental) spoofing is possible. PGP has, from its first versions, always included provisions for distributing a user's public keys in an 'identity certificate' which is so constructed cryptographically that any tampering (or accidental garble) is readily detectable. But merely making a certificate effectively impossible to modify undetectably is also insufficient. It can prevent corruption only after the certificate has been created, not before. Users must also ensure by some means that the public key in a certificate actually does belong to the person/entity claiming it. From its first release, PGP products have included an internal certificate 'vetting scheme' to assist with this; it has been called a web of trust. A given public key (or more specifically, information binding a user name to a key) may be digitally signed by a third party user to attest to the association between someone (actually a user name) and the key. There are several levels of confidence which can be included in such signatures. Although many programs read and write this information, few (if any) include this level of certification when calculating whether to trust a key.
The web of trust protocol was first described by Zimmermann in the manual for PGP version 2.0:
As time goes on, you will accumulate keys from other people that you may want to designate as trusted introducers. Everyone else will each choose their own trusted introducers. And everyone will gradually accumulate and distribute with their key a collection of certifying signatures from other people, with the expectation that anyone receiving it will trust at least one or two of the signatures. This will cause the emergence of a decentralized fault-tolerant web of confidence for all public keys.
The web of trust mechanism has advantages over a centrally managed PKI scheme, but has not been universally used. Users have to be willing to accept certificates and check their validity manually, or simply accept them unchecked. No satisfactory solution to the underlying problem has been found.
Certificates
In the (more recent) OpenPGP specification, trust signatures can be used to support creation of certificate authorities. A trust signature indicates both that the key belongs to its claimed owner and that the owner of the key is trustworthy to sign other keys at one level below their own. A level 0 signature is comparable to a web of trust signature, since only the validity of the key is certified. A level 1 signature is similar to the trust one has in a certificate authority because a key signed to level 1 is able to issue an unlimited number of level 0 signatures. A level 2 signature is highly analogous to the trust assumption users must rely on whenever they use the default certificate authority list (like those included in web browsers); it allows the owner of the key to make other keys certificate authorities.
PGP versions have always included a way to cancel ('revoke') identity certificates. A lost or compromised private key will require this if communication security is to be retained by that user. This is, more or less, equivalent to the certificate revocation lists of centralized PKI schemes. Recent PGP versions have also supported certificate expiration dates.
The problem of correctly identifying a public key as belonging to a particular user is not unique to PGP. All public key / private key cryptosystems have the same problem, if in slightly different guise, and no fully satisfactory solution is known. PGP's original scheme, at least, leaves the decision whether or not to use its endorsement/vetting system to the user, while most other PKI schemes do not, requiring instead that every certificate attested to by a central certificate authority be accepted as correct.
Security quality
To the best of publicly available information, there is no known method which will allow a person or group to break PGP encryption by cryptographic or computational means. Early versions of PGP have been found to have theoretical vulnerabilities, so current versions are recommended. Indeed, in 1996, cryptographer Bruce Schneier characterized an early version as being "the closest you're likely to get to military-grade encryption."[1] In contrast to security systems/protocols like SSL, which only protect data in transit over a network, PGP encryption can also be used to protect data in long-term storage such as disk files.
The cryptographic security of PGP encryption depends on the assumption that the algorithms used are unbreakable by direct cryptanalysis with current equipment and techniques. For instance, in the original version, the RSA algorithm was used to encrypt session keys; RSA's security depends upon the one-way nature of integer factoring. New, currently unknown, integer factorization techniques might therefore make breaking RSA easier than it is now, or perhaps even trivially easy. However, it is generally presumed by informed observers that this is an intractable problem, and likely to remain so. Likewise, the secret-key algorithm used in PGP version 2 was IDEA, which might, at some future time, be found to have a previously unsuspected cryptanalytic flaw. Specific instances of current PGP or IDEA insecurities, if they exist, are not publicly known. As current versions of PGP have added additional encryption algorithms, the degree of their cryptographic vulnerability varies with the algorithm used; in practice, none of the algorithms in current use is publicly known to have cryptanalytic weaknesses.

CRYPTOGRAPHY

Cryptography is, traditionally, the study of ways to convert information from its normal, comprehensible form into an obscured guise, unreadable without special knowledge — the practice of encryption. In the past, cryptography helped ensure secrecy in important communications, such as those of spies, military leaders, and diplomats. In recent decades, the field of cryptography has expanded its remit. Examples include schemes like digital signatures and digital cash, digital rights management for intellectual property protection, and securing electronic commerce. Cryptography is now often built into the infrastructure for computing and telecommunications; users may not even be aware of its presence.
RSA

In cryptology, RSA is an algorithm for public-key cryptography. It was the first algorithm known to be suitable for signing as well as encryption, and one of the first great advances in public-key cryptography. RSA is widely used in electronic commerce protocols, and is believed to be secure given sufficiently long keys and the use of up-to-date implementations.


Padding schemes
When used in practice, RSA is generally combined with some padding scheme. The goal of the padding scheme is to prevent a number of attacks that potentially work against RSA without padding:
• When encrypting with low encryption exponents (e.g., e = 3) and small values of m (i.e., m < n^(1/e)), the result of m^e is strictly less than the modulus n. In this case, the ciphertext can be decrypted easily by taking the e-th root of the ciphertext over the integers (see the sketch after this list).
• Because RSA encryption is a deterministic encryption algorithm (i.e., it has no random component), an attacker can successfully launch a chosen-plaintext attack against the cryptosystem by encrypting likely plaintexts under the public key and testing whether they are equal to the ciphertext. A cryptosystem is called semantically secure if an attacker cannot distinguish two encryptions from each other even if the attacker knows (or has chosen) the corresponding plaintexts. As described above, RSA without padding is not semantically secure.
• RSA has the property that the product of two ciphertexts is equal to the encryption of the product of the respective plaintexts; that is, (m1^e)·(m2^e) ≡ (m1·m2)^e (mod n). Because of this multiplicative property, a chosen-ciphertext attack is possible. E.g., an attacker who wants to know the decryption of a ciphertext c = m^e mod n may ask the holder of the secret key to decrypt an innocent-looking ciphertext c·r^e mod n for some value r chosen by the attacker. Because of the multiplicative property, this is the encryption of m·r mod n. Hence, if the attack succeeds, the attacker learns m·r mod n, from which he can derive the message m by multiplying by the modular inverse of r modulo n.
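The first of these attacks can be demonstrated in a few lines of Python (the primes used to build the modulus are arbitrary illustrative choices; the attack itself never uses the factorization):

def icbrt(n: int) -> int:
    """Integer cube root (floor) by binary search."""
    lo, hi = 0, 1 << (n.bit_length() // 3 + 2)
    while lo < hi:
        mid = (lo + hi) // 2
        if mid ** 3 < n:
            lo = mid + 1
        else:
            hi = mid
    return lo if lo ** 3 == n else lo - 1

e = 3
n = (2**127 - 1) * (2**89 - 1)        # ~216-bit modulus (both factors happen to be prime)
m = int.from_bytes(b"secret", "big")  # small message, so m**3 < n

c = pow(m, e, n)                      # unpadded RSA encryption
recovered = icbrt(c)                  # no modular reduction occurred, so take the root
assert recovered == m
print(recovered.to_bytes(6, "big"))   # b'secret'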
To avoid these problems, practical RSA implementations typically embed some form of structured, randomized padding into the value m before encrypting it. This padding ensures that m does not fall into the range of insecure plaintexts, and that a given message, once padded, will encrypt to one of a large number of different possible ciphertexts.
Standards such as PKCS have been carefully designed to securely pad messages prior to RSA encryption. Because these schemes pad the plaintext m with some number of additional bits, the size of the un-padded message M must be somewhat smaller. RSA padding schemes must be carefully designed so as to prevent sophisticated attacks which may be facilitated by a predictable message structure. Early versions of the PKCS standard (i.e. PKCS #1 up to version 1.5) used a construction that turned RSA into a semantically secure encryption scheme. This version was later found vulnerable to a practical adaptive chosen ciphertext attack. Later versions of the standard include Optimal Asymmetric Encryption Padding (OAEP), which prevents these attacks. The PKCS standard also incorporates processing schemes designed to provide additional security for RSA signatures, e.g., the Probabilistic Signature Scheme for RSA (RSA-PSS).
Signing messages
Suppose Alice uses Bob's public key to send him an encrypted message. In the message, she can claim to be Alice but Bob has no way of verifying that the message was actually from Alice since anyone can use Bob's public key to send him encrypted messages. So, in order to verify the origin of a message, RSA can also be used to sign a message.
Suppose Alice wishes to send a signed message to Bob. She produces a hash value of the message, raises it to the power of d mod n (as she does when decrypting a message), and attaches it as a "signature" to the message. When Bob receives the signed message, he raises the signature to the power of e mod n (as he does when encrypting a message), and compares the resulting hash value with the message's actual hash value. If the two agree, he knows that the author of the message was in possession of Alice's secret key, and that the message has not been tampered with since.
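The underlying arithmetic can be checked with deliberately tiny, insecure textbook parameters (for illustration only; h stands in for the hash value):

p, q = 61, 53
n = p * q                            # 3233
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+ modular inverse)

h = 1234                             # stand-in for the hash of the message, h < n
signature = pow(h, d, n)             # Alice: raise the hash to the power d mod n
assert pow(signature, e, n) == h     # Bob: raising to the power e recovers the hash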
Note that secure padding schemes such as RSA-PSS are as essential for the security of message signing as they are for message encryption, and that the same key should never be used for both encryption and signing purposes.

Digital Signature Algorithm

The Digital Signature Algorithm (DSA) is a United States Federal Government standard or FIPS for digital signatures. It was proposed by the National Institute of Standards and Technology (NIST) in August 1991 for use in their Digital Signature Standard (DSS), specified in FIPS 186 [1], adopted in 1993. A minor revision was issued in 1996 as FIPS 186-1 [2], and the standard was expanded further in 2000 as FIPS 186-2 [3].
DSA is covered by U.S. Patent 5,231,668 , filed July 26, 1991, and attributed to David W. Kravitz, a former NSA employee. This patent was given to "The United States of America as represented by the Secretary of Commerce, Washington, D.C." and the NIST has made this patent available world-wide royalty-free. [4] Dr. Claus P. Schnorr claims that his U.S. Patent 4,995,082 covers DSA; this claim is disputed.
Key generation
Key generation has two phases. The first phase is a choice of algorithm parameters which may be shared between different users of the system:
Choose a cryptographic hash function H. In the original DSS, H was always SHA-1, but stronger hash functions from the SHA family are also in use. Sometimes the output of a newer hash function is truncated to the size of an older one for compatibility with existing key pairs.
Decide on a key length L. This is the primary measure of the cryptographic strength of the key. The original DSS constrained L to be a multiple of 64 between 512 and 1024 (inclusive). Later, FIPS 186-2, change notice 1, specified that L should always be 1024. Later yet, NIST 800-57 recommends lengths of 2048 (or 3072) for keys with security lifetimes extending beyond 2010 (or 2030).
Choose a prime q with the same number of bits as the output of H.
Choose an L-bit prime p such that p−1 is a multiple of q.
Choose g, a number whose multiplicative order modulo p is q. This may be done by setting g = h^((p−1)/q) mod p for some arbitrary h (1 < h < p−1), and trying again with a different h if the result comes out as 1. Most choices of h will lead to a usable g; commonly h = 2 is used.
The algorithm parameters (p, q, g) may be shared between different users of the system. The second phase computes private and public keys for a single user:
Choose x by some random method, where 0 < x < q.
Calculate y = g^x mod p.
Public key is (p, q, g, y). Private key is x.
The forthcoming FIPS 186-3 (available as a draft [5]) uses SHA-224/256/384/512 as the hash function, q of size 224 and 256 bits, and L equal to 2048 and 3072, respectively.
There exist efficient algorithms for computing the modular exponentiations h^a mod p and g^x mod p.
Signing
Generate a random per-message value k where 0 < k < q
Calculate r = (g^k mod p) mod q
Calculate s = (k^(−1) · (H(m) + x·r)) mod q
Recalculate the signature in the unlikely case that r=0 or s=0
The signature is (r,s)
The extended Euclidean algorithm can be used to compute the modular inverse k^(−1) mod q.
Verifying
Calculate w = s^(−1) mod q
Calculate u1 = (H(m)·w) mod q
Calculate u2 = (r·w) mod q
Calculate v = ((g^u1 · y^u2) mod p) mod q
The signature is valid if v = r
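A toy run of all three phases, with deliberately small textbook parameters (p = 7879, q = 101; real deployments use the FIPS 186 sizes, and the integer Hm stands in for a real hash value):

import secrets

p, q = 7879, 101                 # q divides p - 1 (7878 = 78 × 101)
g = pow(3, (p - 1) // q, p)      # h = 3 yields an element of order q

x = secrets.randbelow(q - 1) + 1 # private key, 0 < x < q
y = pow(g, x, p)                 # public key

def sign(Hm: int) -> tuple[int, int]:
    while True:                           # retry in the unlikely case r = 0 or s = 0
        k = secrets.randbelow(q - 1) + 1  # fresh per-message secret, 0 < k < q
        r = pow(g, k, p) % q
        s = pow(k, -1, q) * (Hm + x * r) % q
        if r and s:
            return r, s

def verify(Hm: int, r: int, s: int) -> bool:
    w = pow(s, -1, q)
    u1, u2 = Hm * w % q, r * w % q
    v = pow(g, u1, p) * pow(y, u2, p) % p % q
    return v == r

r, s = sign(22)                  # 22 stands in for H(m) mod q
assert verify(22, r, s)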
DSA is similar to the ElGamal signature scheme.
Correctness of the algorithm
The signature scheme is correct in the sense that the verifier will always accept genuine signatures. This can be shown as follows:
First, if g = h^((p−1)/q) mod p it follows that g^q ≡ h^(p−1) ≡ 1 (mod p) by Fermat's little theorem. Since g > 1 and q is prime, g must have order q.
The signer computes

s = k^(−1) · (H(m) + x·r) mod q.

Thus

k ≡ H(m)·s^(−1) + x·r·s^(−1) ≡ H(m)·w + x·r·w (mod q).

Since g has order q we have

g^k ≡ g^(H(m)·w) · g^(x·r·w) ≡ g^(H(m)·w) · y^(r·w) ≡ g^u1 · y^u2 (mod p).

Finally, the correctness of DSA follows from

r = (g^k mod p) mod q = ((g^u1 · y^u2) mod p) mod q = v.

HUSHMAIL

Hushmail is a web-based email service founded by Cliff Baltzley after leaving Ultimate Privacy. Hushmail offers PGP-encrypted e-mail, file storage, vanity domain service, and instant messaging (Hush Messenger). It was founded in May 1999 by Hush Communications (based in Vancouver, British Columbia, Canada, with offices in Dublin, Ireland; Delaware, United States; and Anguilla). The Hushmail.com servers are hosted in Vancouver. Hushmail uses OpenPGP standards and the source is available for download.
If public encryption keys are available to both recipient and sender (either both are Hushmail users or have uploaded PGP keys to the Hush keyserver), Hushmail can convey authenticated, encrypted messages in both directions. For recipients for whom no public key is available, Hushmail will allow a message to be encrypted by a password (with a password hint) and stored for pickup by the recipient, or the message can be sent in cleartext.
Hushmail has many added security features, such as hidden IP addresses in e-mail headers. Due to the small size of the free e-mail inbox (2 MB) and the lack of IMAP or POP3 on free accounts, some users may prefer other e-mail solutions. Paid accounts have several hundred MB of storage as well as IMAP and POP3 access. PC Magazine has recommended it as the top anonymous e-mail service for privacy advocates.
The Hushmail suite also includes a secure IM tool called Hush Messenger as well as web based key management tools.
Users must trust, to a certain extent, that Hush's equipment and software are in honest hands, and always have been. Nevertheless, the design of the software, which is largely open for inspection, removes some of this need for trust: for example, barring unknown security holes, a Hush user's private decryption keys are not normally available to the operators of Hush's equipment.

SYMMETRIC KEY CRYPTOGRAPHY

In symmetric key cryptography, both parties must possess a secret key which they must exchange prior to using any encryption. Distribution of secret keys has been problematic until recently, because it involved face-to-face meeting, use of a trusted courier, or sending the key through an existing encryption channel. The first two are often impractical and always unsafe, while the third depends on the security of a previous key exchange.
In public key cryptography, distribution of public keys is done through public key servers. When a person creates a key pair, he keeps one key private; the other, the public key, is uploaded to a server where it can be accessed by anyone to send the user a private, encrypted message. Disclosure of public keys is not only not a problem, but is actively encouraged. The private keys are never transmitted, and can therefore be physically secured.
Secure Sockets Layer (SSL) uses Diffie-Hellman key exchange if the client does not have a public-private key pair and a published certificate in the Public Key Infrastructure, and Public Key Cryptography if the user does have both the keys and the credential.
In secret sharing, a secret (password, key, trade secret, ...) is used to generate a number of distinct pieces ("shares"), which are distributed so that a sufficiently large subset of the recipients can jointly reconstruct and use the secret information, while no individual share reveals it. Secret sharing is also called secret splitting, key splitting, and split knowledge.
We want to share a secret among N people so that any M of them (M of N, with M ≤ N) can regenerate the original information, but no smaller group of up to M − 1 can do so. There are mathematical problems of this type, such as the number of points needed to identify a polynomial of a certain degree (used in Shamir's scheme), or the number of intersecting hyperplanes needed to specify a point (used in Blakley's scheme). We can hand out data specifying any number of points on the curve, or hyperplanes through the point, without altering the number needed to solve the problem and, in our application, access the protected resource.
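A minimal sketch of Shamir's M-of-N scheme over a prime field follows (the prime, the secret, and the function names are illustrative):

import secrets

P = 2**127 - 1   # a prime comfortably larger than any secret shared here

def make_shares(secret: int, m: int, n: int) -> list[tuple[int, int]]:
    # Random polynomial of degree m - 1 whose constant term is the secret.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(m - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares: list[tuple[int, int]]) -> int:
    # Lagrange interpolation at x = 0 recovers the constant term.
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for i, (xi, _) in enumerate(shares):
            if i != j:
                num = num * -xi % P
                den = den * (xj - xi) % P
        total = (total + yj * num * pow(den, -1, P)) % P
    return total

shares = make_shares(123456789, m=3, n=5)
assert recover(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
assert recover(shares[2:]) == 123456789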
Key distribution is an important issue in wireless sensor network (WSN) design. There are many key distribution schemes in the literature designed to maintain easy and at the same time secure communication among sensor nodes. The most accepted method of key distribution in WSNs is key predistribution, where secret keys are placed in sensor nodes before deployment. When the nodes are deployed over the target area, the secret keys are used to create the network. For more information, see key distribution in wireless sensor networks.

Simple Authentication and Security Layer (SASL)

Simple Authentication and Security Layer (SASL) is a framework for authentication and data security in Internet protocols. It decouples authentication mechanisms from application protocols, in theory allowing any authentication mechanism supported by SASL to be used in any application protocol that uses SASL. Authentication mechanisms can also support proxy authorization, a facility allowing one user to assume the identity of another. Authentication mechanisms can also provide a data security layer offering data integrity and data confidentiality services; DIGEST-MD5 is an example of a mechanism that can provide such a layer. Application protocols that support SASL typically also support Transport Layer Security (TLS) to complement the services offered by SASL.
SASL was originally specified in RFC 2222, authored by John Myers while at Carnegie Mellon University. That document was made obsolete by RFC 4422, edited by Alexey Melnikov and Kurt Zeilenga.
SASL Mechanisms
A SASL mechanism is modelled as a series of challenges and responses. Defined SASL mechanisms [1] include:
"EXTERNAL", where authentication is implicit in the context (e.g., for protocols already using IPsec or TLS)
"ANONYMOUS", for unauthenticated guest access
"PLAIN", a simple cleartext password mechanism. PLAIN obsoleted the LOGIN mechanism.
"OTP", a one-time password mechanism. OTP obsoleted the SKEY Mechanism.
"SKEY", an S/KEY mechanism.
"CRAM-MD5", a simple challenge-response scheme based on HMAC-MD5.
"DIGEST-MD5", HTTP Digest compatible challenge-response scheme based upon MD5. DIGEST-MD5 offers a data security layer.
"NTLM", an NT LAN Manager authentication mechanism.
"GSSAPI", for Kerberos V5 authentication via the GSSAPI. GSSAPI offers a data security layer.
A family of SASL mechanisms is planned to support arbitrary GSSAPI mechanisms.
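As an illustration of the challenge-response pattern, a CRAM-MD5 response concatenates the username with a hex-encoded HMAC-MD5 of the server's challenge, keyed with the shared password (the challenge and credentials below are made up):

import hashlib, hmac

challenge = b"<1896.697170952@postoffice.example.com>"   # server-issued, unique per attempt
password = b"tanstaaftanstaaf"

digest = hmac.new(password, challenge, hashlib.md5).hexdigest()
response = "tim " + digest    # sent to the server, which computes the same HMAC
print(response)

The password itself never crosses the wire; an eavesdropper sees only a one-time HMAC bound to this particular challenge.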
SASL-aware Application Protocols
Application protocols define their representation of SASL exchanges with a profile. A protocol has a service name such as "ldap" in a registry shared with GSSAPI and Kerberos [2]. Protocols currently supporting SASL include BEEP, IMAP, LDAP, POP, SMTP, IMSP, ACAP, and XMPP.

Transport Layer Security (TLS)

Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications on the Internet for such things as web browsing, e-mail, Internet faxing, instant messaging and other data transfers. There are slight differences between SSL and TLS, but the protocol remains substantially the same. The term "TLS" as used here applies to both protocols unless clarified by context.
Description
The TLS protocol allows applications to communicate across a network in a way designed to prevent eavesdropping, tampering, and message forgery. TLS provides endpoint authentication and communications privacy over the Internet using cryptography. Typically, only the server is authenticated (i.e., its identity is ensured) while the client remains unauthenticated; this means that the end user (whether an individual or an application, such as a Web browser) can be sure with whom they are communicating. The next level of security—in which both ends of the "conversation" are sure with whom they are communicating—is known as mutual authentication. Mutual authentication requires public key infrastructure (PKI) deployment to clients unless TLS-PSK or TLS-SRP are used, which provide strong mutual authentication without needing to deploy a PKI.
TLS involves three basic phases:
Peer negotiation for algorithm support
Public key exchange and certificate-based authentication
Symmetric cipher encryption
During the first phase, the client and server negotiate cipher suites, which combine one cipher from each of the following:
Public-key cryptography: RSA, Diffie-Hellman, DSA
Symmetric ciphers: RC2, RC4, IDEA, DES, Triple DES, AES or Camellia
Cryptographic hash function: MD2, MD4, MD5 or SHA
How it works
A TLS client and server negotiate a stateful connection by using a handshaking procedure. During this handshake, the client and server agree on various parameters used to establish the connection's security.
The handshake begins when a client connects to a TLS-enabled server requesting a secure connection, and presents a list of ciphers and hash functions.
From this list, the server picks the strongest cipher and hash function that it also supports and notifies the client of the decision.
The server sends back its identification in the form of a digital certificate. The certificate will usually contain the server name, the trusted certificate authority (CA), and the server's public encryption key.
The client may contact the trusted CA's server and confirm that the certificate is authentic before proceeding.
In order to generate the session keys used for the secure connection, the client encrypts a random number with the server's public key and sends the result to the server. Only the server can decrypt it (with its private key): this is what keeps the session keys hidden from third parties, since only the server and the client have access to this data.
Both parties generate key material for encryption and decryption.
This concludes the handshake and begins the secured connection, which is encrypted and decrypted with the key material until the connection closes.
If any one of the above steps fails, the TLS handshake fails, and the connection is not created.
TLS Handshake in Detail
The TLS protocol exchanges records that encapsulate the data to be exchanged. Each record can be compressed, padded, appended with a message authentication code (MAC), or encrypted, all depending on the state of the connection. Each record has a content type field that specifies the type of record, a length field, and a TLS version field.
When the connection starts, the record encapsulates another protocol, the handshake protocol, which has content type 22.
A simple connection example follows:
A Client sends a ClientHello message specifying the highest TLS protocol version it supports, a random number, a list of suggested cipher suites and compression methods.
The Server responds with a ServerHello, containing the chosen protocol version, a random number, cipher suite, and compression method from the choices offered by the client.
The Server sends its Certificate (depending on the selected cipher suite, this may be omitted by the Server).
These certificates are currently X.509, but there is also a draft specifying the use of OpenPGP based certificates.
The server may request a certificate from the client, so that the connection can be mutually authenticated, using a CertificateRequest.
The Server sends a ServerHelloDone message, indicating it is done with handshake negotiation.
The Client responds with a ClientKeyExchange message, which may contain a PreMasterSecret, public key, or nothing. (Again, this depends on the selected cipher.)
The Client and Server then use the random numbers and PreMasterSecret to compute a common secret, called the "master secret". All other key data is derived from this master secret (and the client- and server-generated random values), which is passed through a carefully designed "pseudorandom function".
The Client now sends a ChangeCipherSpec message, essentially telling the Server, "Everything I tell you from now on will be encrypted." Note that the ChangeCipherSpec is itself a record-level protocol, and has type 20, and not 22.
Finally, the Client sends an encrypted Finished message, containing a hash and MAC over the previous handshake messages.
The Server will attempt to decrypt the Client's Finished message, and verify the hash and MAC. If the decryption or verification fails, the handshake is considered to have failed and the connection should be torn down.
Finally, the Server sends a ChangeCipherSpec and its encrypted Finished message, and the Client performs the same decryption and verification.
At this point, the "handshake" is complete and the Application protocol is enabled, with content type of 23. Application messages exchanged between Client and Server will be encrypted.
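The outcome of such a handshake can be inspected with Python's standard ssl module (the host name here is illustrative):

import socket
import ssl

context = ssl.create_default_context()   # trusted CA list and hostname checking enabled
with socket.create_connection(("example.org", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.org") as tls:
        # The full handshake described above has already completed at this point.
        print(tls.version())                  # negotiated protocol version, e.g. 'TLSv1.3'
        print(tls.cipher())                   # negotiated cipher suite
        print(tls.getpeercert()["subject"])   # fields from the validated server certificate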
Security
TLS/SSL have a variety of security measures:
The client may use the CA's public key to validate the CA's digital signature on the server certificate. If the digital signature can be verified, the client accepts the server certificate as a valid certificate issued by a trusted CA.
The client verifies that the issuing Certificate Authority (CA) is on its list of trusted CAs.
The client checks the server's certificate validity period. The authentication process stops if the current date and time fall outside of the validity period.
To protect against Man-in-the-Middle attacks, the client compares the actual DNS name of the server to the DNS name on the certificate. Browser-dependent, not defined by TLS.
Protection against a downgrade of the protocol to a previous (less secure) version or a weaker cipher suite.
Numbering all the Application records with a sequence number, and using this sequence number in the MACs.
Using a message digest enhanced with a key (so only a key-holder can check the MAC). This is specified in RFC 2104. TLS only.
The message that ends the handshake ("Finished") sends a hash of all the exchanged handshake messages seen by both parties.
The pseudorandom function splits the input data in half and processes each one with a different hashing algorithm (MD5 and SHA-1), then XORs them together. This provides protection if one of these algorithms is found to be vulnerable. TLS only.
SSL v3 improved upon SSL v2 by adding SHA-1 based ciphers, and support for certificate authentication. Additional improvements in SSL v3 include better handshake protocol flow and increased resistance to man-in-the-middle attacks.
Applications
TLS runs on layers beneath application protocols such as HTTP, FTP, SMTP, NNTP, and XMPP and above a reliable transport protocol, TCP for example. While it can add security to any protocol that uses reliable connections (such as TCP), it is most commonly used with HTTP to form HTTPS. HTTPS is used to secure World Wide Web pages for applications such as electronic commerce and asset management. SMTP is also an area in which TLS has been growing and is specified in RFC 3207. These applications use public key certificates to verify the identity of endpoints.
An increasing number of client and server products support TLS natively, but many still lack support. As an alternative, users may wish to use standalone TLS products like Stunnel. Wrappers such as Stunnel rely on being able to obtain a TLS connection immediately, by simply connecting to a separate port reserved for the purpose. For example, by default the TCP port for HTTPS is 443, to distinguish it from HTTP on port 80.
TLS can also be used to tunnel an entire network stack to create a VPN, as is the case with OpenVPN. Many vendors now marry TLS's encryption and authentication capabilities with authorization. There has also been substantial development since the late 1990s in creating client technology outside of the browser to enable support for client/server applications. When compared against traditional IPsec VPN technologies, TLS has some inherent advantages in firewall and NAT traversal that make it easier to administer for large remote-access populations.
TLS is also increasingly being used as the standard method for protecting SIP application signaling. TLS can be used to provide authentication and encryption of the SIP signalling associated with VOIP (Voice over IP) and other SIP-based applications.

Physical Access

Physical access
A person may be allowed physical access depending on payment, authorization, etc.; there may also be one-way traffic of people. Such rules can be enforced by personnel such as a border guard, a doorman, or a ticket checker, or with a device such as a turnstile. There may be fences to prevent circumvention of this access control. An alternative to access control in the strict sense (physically controlling access itself) is a system of checking authorized presence; see, e.g., Ticket controller (transportation). A variant is exit control, e.g. of a shop (checkout) or a country.
In physical security, the term access control refers to the practice of restricting entrance to a property, a building, or a room to authorized persons. Physical access control can be achieved by a human (a guard, bouncer, or receptionist), through mechanical means such as locks and keys, or through technological means such as a card access system.
Computer security
In computer security, access control includes authentication, authorization and audit. It also includes measures such as physical devices, including biometric scans and metal locks, hidden paths, digital signatures, encryption, social barriers, and monitoring by humans and automated systems.
In any access control model, the entities that can perform actions in the system are called subjects, and the entities representing resources to which access may need to be controlled are called objects (see also Access Control Matrix). Subjects and objects should both be considered as software entities, rather than as human users: any human user can only have an effect on the system via the software entities that they control. Although some systems equate subjects with user IDs, so that all processes started by a user by default have the same authority, this level of control is not fine-grained enough to satisfy the Principle of least privilege, and arguably is responsible for the prevalence of malware in such systems (see computer insecurity).
In some models, for example the object-capability model, any software entity can potentially act as both a subject and object.
Access control models used by current systems tend to fall into one of two classes: those based on capabilities and those based on access control lists (ACLs). In a capability-based model, holding an unforgeable reference or capability to an object provides access to the object (roughly analogous to how possession of your house key grants you access to your house); access is conveyed to another party by transmitting such a capability over a secure channel. In an ACL-based model, a subject's access to an object depends on whether its identity is on a list associated with the object (roughly analogous to how a bouncer at a private party would check your ID to see if your name is on the guest list); access is conveyed by editing the list. (Different ACL systems have a variety of different conventions regarding who or what is responsible for editing the list and how it is edited.)
Both capability-based and ACL-based models have mechanisms to allow access rights to be granted to all members of a group of subjects (often the group is itself modeled as a subject).
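A toy sketch of the contrast (all names are illustrative): in the ACL model the object carries the access list, while in the capability model possession of an unforgeable reference is itself the authority.

# ACL model: access is decided by looking the subject up in a list attached to the object.
acl = {"payroll.txt": {"alice": {"read", "write"}, "bob": {"read"}}}

def acl_allows(subject: str, obj: str, right: str) -> bool:
    return right in acl.get(obj, {}).get(subject, set())

assert acl_allows("bob", "payroll.txt", "read")
assert not acl_allows("bob", "payroll.txt", "write")

# Capability model: no identity check at use time; whoever holds the capability may
# use it, and access is delegated by handing the object over a secure channel.
class Capability:
    def __init__(self, obj: str, rights: frozenset):
        self._obj, self._rights = obj, rights
    def read(self) -> str:
        assert "read" in self._rights
        return f"contents of {self._obj}"

bobs_cap = Capability("payroll.txt", frozenset({"read"}))
print(bobs_cap.read())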
Access control systems provide the essential services of identification and authentication (I&A), authorization, and accountability where:
identification and authentication determine who can log on to a system, and the association of users with the software subjects that they are able to control as a result of logging in;
authorization determines what a subject can do;
accountability identifies what a subject (or all subjects associated with a user) did.
Identification and authentication (I&A)
Identification and authentication (I&A) is a two-step process that determines who can log on to a system. Identification is how a user tells a system who he or she is (for example, by using a username). The identification component of an access control system is normally a relatively simple mechanism based on either Username or User ID. In the case of a system or process, identification is usually based on:
Computer name
Media Access Control (MAC) address
Internet Protocol (IP) address
Process ID (PID)
The only requirements for identification are that the identification:
Must uniquely identify the user.
Shouldn't identify that user's position or relative importance in an organization (such as labels like president or CEO).
Should avoid using common or shared user accounts, such as root, admin, and sysadmin. Such accounts provide no accountability and are juicy targets for hackers.
Authentication is the process of verifying a user's claimed identity (for example, by comparing an entered password to the password stored on a system for a given username).
Authentication is based on at least one of these four factors:
Something you know, such as a password or a personal identification number (PIN). This assumes that only the owner of the account knows the password or PIN needed to access the account.
Something you have, such as a smart card or token. This assumes that only the owner of the account has the necessary smart card or token needed to unlock the account.
Something you are, such as fingerprint, voice, retina, or iris characteristics.
Where you are, for example inside or outside a company firewall, or proximity of login location to a personal GPS device.
Authorization
Authorization applies to subjects rather than to users (the association between a user and the subjects initially controlled by that user having been determined by I&A). Authorization determines what a subject can do on the system.
Most modern operating systems define sets of permissions that are variations or extensions of three basic types of access:
Read (R): The subject can
Read file contents
List directory contents
Write (W): The subject can change the contents of a file or directory with these tasks:
Add
Create
Delete
Rename
Execute (X): If the file is a program, the subject can cause the program to be run. (In Unix systems, the 'execute' permission doubles as a 'traverse directory' permission when granted for a directory.)
These rights and permissions are implemented differently in systems based on discretionary access control (DAC) and mandatory access control (MAC).
Accountability
Accountability uses such system components as audit trails (records) and logs to associate a subject with its actions. The information recorded should be sufficient to map the subject to a controlling user. Audit trails and logs are important for
Detecting security violations
Re-creating security incidents
If no one is regularly reviewing your logs and they are not maintained in a secure and consistent manner, they may not be admissible as evidence.
Many systems can generate automated reports based on certain predefined criteria or thresholds, known as clipping levels. For example, a clipping level may be set to generate a report for the following:
More than three failed logon attempts in a given period
Any attempt to use a disabled user account
These reports help a system administrator or security administrator to more easily identify possible break-in attempts.
Access Control Techniques
Access control techniques are sometimes categorized as either discretionary or mandatory.
Discretionary Access Control
Discretionary access control (DAC) is an access policy determined by the owner of an object. The owner decides who is allowed access to the object and what privileges they have.
Two important concepts in DAC are
File and data ownership: Every object in the system has an owner. In most DAC systems, each object's initial owner is the subject that caused it to be created. The access policy for an object is determined by its owner.
Access rights and permissions: These are the controls that an owner can assign to other subjects for specific resources.
Access controls may be discretionary in ACL-based, capability-based, or Role-based access control systems. (In capability-based systems, there is usually no explicit concept of 'owner', but the creator of an object has a similar degree of control over its access policy.)
Mandatory Access Control
Mandatory access control (MAC) is an access policy determined by the system, not the owner. MAC is used in multilevel systems that process highly sensitive data, such as classified government and military information. A multilevel system is a single computer system that handles multiple classification levels between subjects and objects.
Sensitivity labels: In a MAC-based system, all subjects and objects must have labels assigned to them. A subject's sensitivity label specifies its level of trust. An object's sensitivity label specifies the level of trust required for access. In order to access a given object, the subject must have a sensitivity level equal to or higher than that of the requested object.
Data import and export: Controlling the import of information from other systems and export to other systems (including printers) is a critical function of MAC-based systems, which must ensure that sensitivity labels are properly maintained and implemented so that sensitive information is appropriately protected at all times.
Two methods are commonly used for applying mandatory access control:
Rule-based access controls: This type of control further defines specific conditions for access to a requested object. All MAC-based systems implement a simple form of rule-based access control to determine whether access should be granted or denied by matching:
An object's sensitivity label
A subject's sensitivity label
Lattice-based access controls: These can be used for complex access control decisions involving multiple objects and/or subjects. A lattice model is a mathematical structure that defines greatest lower-bound and least upper-bound values for a pair of elements, such as a subject and an object.
Few systems implement MAC. XTS-400 is an example of one that does.
Telecommunication
In telecommunication, the term access control is defined in U.S. Federal Standard 1037C [1] with the following meanings:
A service feature or technique used to permit or deny use of the components of a communication system.
A technique used to define or restrict the rights of individuals or application programs to obtain data from, or place data onto, a storage device.
The definition or restriction of the rights of individuals or application programs to obtain data from, or place data into, a storage device.
The process of limiting access to the resources of an AIS to authorized users, programs, processes, or other systems.
That function performed by the resource controller that allocates system resources to satisfy user requests.