Public Cloud API Best Practices

10 minutes of slides

15 minutes of demo

 


JS Widgets For Rapid API Integration into Existing Web Sites

Twitter has made it even easier to integrate with their Search API by providing a few JavaScript widgets that you can quickly insert into the HTML of your website or app to display tweets alongside your application or content.

For example, in just a few lines of code you can display tweets matching a search term in a customizable JavaScript search widget.

<script charset="utf-8" src="http://widgets.twimg.com/j/2/widget.js"></script>
<script>
new TWTR.Widget({
  version: 2,
  type: 'search',
  search: 'FINDME',
  interval: 30000,
  title: 'TITLE',
  subject: 'SUBTITLE',
  width: 250,
  height: 300,
  theme: {
    shell: {
      background: '#8ec1da',
      color: '#ffffff'
    },
    tweets: {
      background: '#ffffff',
      color: '#444444',
      links: '#1985b5'
    }
  },
  features: {
    scrollbar: false,
    loop: true,
    live: true,
    behavior: 'default'
  }
}).render().start();
</script>

source: https://twitter.com/about/resources/widgets/widget_search


Keep IT Private

To do so, use a crypto stack like GNU Privacy Guard (GnuPG, or gpg).

0. Get gpg from http://www.gnupg.org/download/index.en.html

1. Create your GNU Privacy Guard key pair.

  • gpg --gen-key
  • Note: with version 2.x the executable is gpg2, so the command becomes: gpg2 --gen-key

2. To decrypt a file encrypted with the .gpg extension

  • gpg -d FileToDecrypt.tar.gpg >>decrypted.tar

3. To send an encrypted file to yourself, use your own public key, which was created for your email address along with your private key in step 1.

To export your public key so that others (or another of your machines) can import it:

  • gpg --armor --export me@domain.com > mypublickey.asc

4. To encrypt a file using gpg, you need the public key of the recipient.

Ask anyone who wishes to collaborate privately with you for their public key. No, you never need their private key, and you should never send anyone your private key or secret passphrase.

  • gpg -e -r foryou@domain.com file.xyz

Note that the recipient's public key needs to have been imported into your keyring (with gpg --import) before you can encrypt to them.
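Putting steps 1 through 4 together, here is a minimal round trip with a hypothetical collaborator (the file names and address below are placeholders):

  • gpg --import collaborator-public-key.asc   (import the public key they sent you)
  • gpg -e -r collaborator@domain.com file.xyz   (writes the encrypted file.xyz.gpg)
  • gpg -d file.xyz.gpg > file.xyz   (the collaborator runs this to decrypt with their own private key)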

For more help:

How To Guide: http://www.dewinter.com/gnupg_howto/english/GPGMiniHowto.html

Cheat Sheet of Commands: http://irtfweb.ifa.hawaii.edu/~lockhart/gpg/gpg-cs.html

Command Line parameters for gpg2

-s, --sign make a signature
--clearsign make a clear text signature
-b, --detach-sign make a detached signature
-e, --encrypt encrypt data
-c, --symmetric encryption only with symmetric cipher
-d, --decrypt decrypt data (default)
--verify verify a signature
-k, --list-keys list keys
--list-sigs list keys and signatures
--check-sigs list and check key signatures
--fingerprint list keys and fingerprints
-K, --list-secret-keys list secret keys
--gen-key generate a new key pair
--gen-revoke generate a revocation certificate
--delete-keys remove keys from the public keyring
--delete-secret-keys remove keys from the secret keyring
--sign-key sign a key
--lsign-key sign a key locally
--edit-key sign or edit a key
--passwd change a passphrase
--export export keys
--send-keys export keys to a key server
--recv-keys import keys from a key server
--search-keys search for keys on a key server
--refresh-keys update all keys from a keyserver
--import import/merge keys
--card-status print the card status
--card-edit change data on a card
--change-pin change a card's PIN
--update-trustdb update the trust database
--print-md print message digests
--server run in server mode

Options:

-a, --armor create ascii armored output
-r, --recipient USER-ID encrypt for USER-ID
-u, --local-user USER-ID use USER-ID to sign or decrypt
-z N set compress level to N (0 disables)
--textmode use canonical text mode
-o, --output FILE write output to FILE
-v, --verbose verbose
-n, --dry-run do not make any changes
-i, --interactive prompt before overwriting
--openpgp use strict OpenPGP behavior

(See the man page for a complete listing of all commands and options)

Examples:

-se -r Bob [file] sign and encrypt for user Bob
--clearsign [file] make a clear text signature
--detach-sign [file] make a detached signature
--list-keys [names] show keys
--fingerprint [names] show fingerprints


Race conditions that ‘talk too’ (TOCTTOU) much

Race conditions 'talk too' (TOCTTOU) much, and there is plenty that can be done between the time of check and the time of use. During this crack in time, malicious users exploit race conditions that have served as security vulnerabilities in systems for almost four decades. Historically, TOCTTOU attacks have taken advantage of a weak filesystem API and limited security controls to open a file, rename it, change its owner (chown), or change its mode from read-only to read/write (chmod). Applications such as vi, gedit, rpm, emacs, gdm, and many other GNU programs have been exploited through TOCTTOU weaknesses. An even longer list of 700+ symlink-related vulnerabilities is published in the NIST CVE/CCE National Vulnerability Database at http://nvd.nist.gov.

In 2008, at FAST '08, the 6th USENIX Conference on File and Storage Technologies, researchers from IBM, Microsoft, and UC Berkeley collaborated to publish a paper on this topic entitled Portably Solving File TOCTTOU Races with Hardness Amplification (http://www.usenix.org/event/fast08/tech/full_papers/tsafrir/tsafrir.pdf). In this paper the authors describe how an attacker can use a TOCTTOU exploit to create a fake password file and rename it to replace the real /etc/passwd file, and, worse yet, how to prolong the attack window using a filesystem maze attack pattern.

Move beyond single servers with shared /tmp space on a local filesystem to the present-day reality of globally distributed, rapidly integrated, cloud-powered applications with elastic databases of information scattered across servers in multiple locations. The mitigation that reduces the risk of a TOCTTOU race condition in your system is to mediate the request and response with a single atomic operation that collapses the check (authenticate) with the use (open).
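As a concrete illustration, here is a minimal sketch (in Python, not taken from the paper) of the classic filesystem version of the flaw, and of collapsing the check and the use into a single atomic open. The path is a placeholder, and O_NOFOLLOW assumes a POSIX system.

import os

path = "/tmp/report.txt"   # placeholder path for the example
with open(path, "w") as f:  # create the file so the example runs end to end
    f.write("example contents\n")

# Vulnerable check-then-use: the file checked is not necessarily the file opened.
# An attacker can swap in a symlink between the access() check and the open().
if os.access(path, os.R_OK):
    with open(path) as f:   # time of use: may now be a different object
        data = f.read()

# Safer: collapse check and use into one atomic open, then inspect the descriptor
# that was actually opened (O_NOFOLLOW refuses to follow a symlink at the final component).
fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
try:
    info = os.fstat(fd)     # check the object we actually hold open
    data = os.read(fd, info.st_size)
finally:
    os.close(fd)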

What more can be done in distributed computing to guard against race conditions?

Build mature systems with atomic transactions as the design principle, and buffer the system with a checkpoint between the user zone and the service zone, also known as the DMZ. Using a mediation proxy in the DMZ to perform the authentication and authorization checks does add latency, but it improves the quality of the system by adding load balancing, packet and message-body introspection, event logging, and possibly a second authorization on the response to verify that the user is authorized to receive the object requested.

A second authorization check can prevent accidental data leakage, makes it more difficult to exploit a race condition, and can detect whether the user is no longer authorized to read the content or is attempting to access an object without execute, read, or write entitlements.

Hence, if you are concerned that your firewall is not doing enough for you, or you are worried that your DMZ has been compromised through the TCP split-handshake firewall vulnerability, then consider using an extra layer of mediation between client-side applications and server-side APIs.

Many see the value of a policy enforcement point (PEP) behind the firewall and in front of the API service; using a PEP such as the SOA Software Unified API Gateway reduces the risk of a man-in-the-middle attack.

 


Consensual Trustworthy Computing

Since June of 2010, I've enjoyed collaborating with the Trusted Cloud Initiative subgroup of the Cloud Security Alliance. In the second half of 2010, we reached a consensus on the language of the audit assessment questions that are aligned to PCI, HIPAA, NIST, and ISO best practices, and on which mitigations are expected for various cloud topologies including public, private, hybrid, IaaS, PaaS, and SaaS. We now have a new RACI matrix template to facilitate the service level agreement negotiation process and align on which parties are responsible, accountable, consulted, and informed. When multiple parties are involved in keeping a system current and operational, the RACI matrix is a very effective method to ensure that the proper resources are allocated to manage the ITIL service support and service delivery procedures. It is in the proper separation of duties that technology mitigations become more trustworthy, especially the governance over production change management, where a human quality control process verifies and validates changes prior to deployment to mitigate the risk of an unauthorized change.

The CSA TCI reference architecture team is focused on the ISO and NIST requirements in the CSA Security Controls Matrix to ensure that our solution addresses the mitigations that are most important to the marketplace. This requirements-based approach to a trustworthy cloud is in alignment with the guiding principles of the TCI mission.

  1. Define protections that enable trust in the cloud.
  2. Develop cross-platform capabilities and patterns for proprietary and open-source providers.
  3. Facilitate trusted and efficient access, administration, and resiliency for the customer/consumer.
  4. Provide direction to secure information that is protected by regulations.
  5. The architecture must facilitate proper and efficient identification, authentication, authorization, administration, and auditability.
  6. Centralize security policy, maintenance operations, and oversight functions.
  7. Access to information must be secure yet still easy to obtain.
  8. Delegate or federate access control where appropriate.
  9. Be easy to adopt and consume, supporting the design of security patterns.
  10. The architecture must be elastic, flexible, and resilient, supporting multi-tenant, multi-landlord platforms.
  11. The architecture must address and support multiple levels of protection, including network, operating system, and application security needs.

Regarding the implementation of the reference architecture, the team agreed in October that the SAML 2.0 HTTP POST binding is the most appropriate for doing business between a public, Internet-connected cloud Policy Enforcement Point (PEP) and a legacy Identity Provider Policy Decision Point (PDP) and/or a legacy LDAP directory service. With the SAML HTTP POST binding, the user's web browser sends the SAML token to the cloud, rather than the Cloud API Service Provider integrating directly with the enterprise Identity Provider (as with the SAML Artifact binding). To ensure that architectural decisions such as these are made properly, we've adopted the ISO/IEC 9126 decision-criteria-based process for software engineering product quality control. Technology choices are made based on the solution's score against the following criteria: Functionality, Reliability, Usability, Efficiency, Maintainability, and Portability. It is this decision-making process that led the team to conclude that SAML tokens are more functional and reliable for enterprise application integration with cloud services for business purposes than a technology such as OpenID, which is better suited for cloud-to-cloud application single sign-on.
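For illustration only, here is a minimal sketch (in Python, with placeholder values, not part of the TCI deliverables) of what the HTTP POST binding carries on the wire: the browser receives an auto-submitting form that posts the base64-encoded SAML response to the cloud provider's assertion consumer URL.

import base64

# Placeholder values for illustration only.
acs_url = "https://cloud.example.com/saml/acs"               # the cloud PEP's assertion consumer service
saml_response_xml = "<samlp:Response>...</samlp:Response>"   # signed response issued by the IdP
relay_state = "requested-resource"

encoded = base64.b64encode(saml_response_xml.encode()).decode()

# The IdP returns this page to the browser, which auto-submits it to the cloud.
auto_post_form = f"""
<form method="post" action="{acs_url}">
  <input type="hidden" name="SAMLResponse" value="{encoded}"/>
  <input type="hidden" name="RelayState" value="{relay_state}"/>
</form>
<script>document.forms[0].submit();</script>
"""
print(auto_post_form)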

I would also like to take a moment to acknowledge some key contributors who have participated and presented on various topics. Marlin Pohlman's contribution representing the cloud audit standards, aligning the Security Controls Matrix to the expected requirements of various reference architectures, has resulted in an improved set of audit questions and a solid set of requirements for the Trusted Cloud Initiative that are directly aligned with the certification subgroup work led by Nico Popp. David Sherr and Jairo Orea have contributed a significant amount of time and energy to align the TCI work with the wants and needs of the financial services and insurance industries. Dr. Shehab presented work that he and his students have done at UNCC to speed up the processing of XACML rules by reordering and categorizing policies. Subra Kumaraswamy has contributed a significant amount of energy on identity and access management requirements and technologies, working with Scott Matsumoto and the implementation subgroup.

With the Cloud Security Alliance Congress this week in Orlando, the security and privacy minds of the world are debating the risks and benefits of moving away from Basic Authentication toward multi-factor authentication techniques that combine what you have (a smart card), what you know (a shared secret), and who you are (biometric data), as well as emerging open protocols such as OAuth and the emerging OAuth 2.0 specification.

For more information about the Trusted Cloud Initiative, or to join, go to http://www.cloudsecurityalliance.org/trustedcloud.html. Once you are a member, you will be able to contribute to the CSA TCI community knowledge repository.


Communication with Integrity and Availability

Open, honest communication is the root of trust creation and peace of mind. Information technology is our universal communication tool; we use it to reach wider audiences faster and to translate our message into multiple languages. The more we talk, the less we fear, and the less we use force.

The silent enemy that will not show himself is the one we fear most. Like Keyser Soze in the movie The Usual Suspects, hackers pretend to be good and hide like cowards in an attempt to obtain stealthy access by stealing the identity of an authorized user. The majority of successful 'black and green hat' hackers use valid credentials to break in and avoid detection.

When enemies are sentient and can at least listen, we still have the opportunity to warn them that unauthorized actions will be dealt with using force if necessary. This is why the Internet is a tool for peace: it is an effective tool to inform the world about everyone's perspective, and hence we all chat and use social networking sites to get to know each other better. But it only works if we can prevent hackers from using buffer overflow attacks to gain root access and make unauthorized changes, and if we can prevent botnets from impacting the availability of a site, so that the information remains available to its target audience.

Even the most difficult situations can be resolved with communication. It is the act of communication that prevents war, because the use of force is unethical without all of the facts. And when both parties are still talking, the future is uncertain and the use of force should be postponed.

In the movie Avatar, war on Pandora was postponed, and a diplomatic process of communication was attempted to obtain access to unobtainium without the use of force. It was only when both sides stopped talking that war began.

Information technology and free speech are the current diplomatic process, and it is the responsibility of the speaker to communicate with responsibility, honesty, and integrity in order to deliver a message that will be received and that SHOULD be believed.

That is the trick Roger Kint used in The Usual Suspects to avoid being caught by the detectives: he simply lied. The poetry in this story is that Keyser Soze's nickname was "Verbal". Hence, do not trust the flatterer who hides behind compliments or the smooth talker who only says what you want to hear. A trustworthy communicator delivers bad news with integrity and authenticity.

This is the premise of security and privacy on the Internet. Some websites are trustworthy, and others are created from sources with no integrity. Our responsibility as creators of trustworthy sources of information on the Internet is to detect and prevent good information from being altered by malicious users. And our responsibility as listeners and readers is to verify that the source of the information is authentic.

Hence, security on the Internet is not primarily about confidentiality; IT is about Integrity and Availability of Information, using technology that is managed by trustworthy people who follow an approved and mature quality-control change management process such as ISO and the emerging NIST Special Publication 800-128 on secure change management.

After all, the I in IT stands for Information, and the Technology is the set of tools we use to communicate peacefully in our universe: videos, music, blogs, websites, newspapers, coffee shops, SMS text messages, and e-mail.

Happy Communicating!


Trusting The Value of Time Now & The Uncertainty of Simultaneous Events

Some may now conclude that the theory of relativity has recently been demonstrated again with the discovery that the rate at which an aluminum-ion clock "ticks" is faster at higher elevations and slower at lower elevations. Researchers in Boulder, Colorado have shown that a difference in elevation as small as one foot has a measurable impact on the "speed of time."
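For a rough sense of scale, here is a back-of-the-envelope sketch using the standard weak-field approximation (delta_f / f ≈ g * delta_h / c^2), not figures from the Boulder team:

# Gravitational time dilation for a one-foot elevation difference (weak-field approximation).
g = 9.80665          # m/s^2, standard gravity
delta_h = 0.3048     # m, one foot
c = 299_792_458      # m/s, speed of light

fractional_shift = g * delta_h / c ** 2   # delta_f / f
seconds_per_day = 86_400

print(f"fractional rate difference: {fractional_shift:.2e}")                        # roughly 3e-17
print(f"offset accumulated per day: {fractional_shift * seconds_per_day:.2e} s")    # a few picoseconds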

Most computers on the planet synchronize their time across the network with an atomic-clock NTP service, which works fine for coarse-grained precision; however, Einstein's work on synchronization is worth reading up on if we are to design solutions for picosecond accuracy in a distributed infrastructure.

http://en.wikipedia.org/wiki/Einstein_synchronisation

Time servers are distributed globally, and all of them, by the way, are affected by the force of gravity at varying elevations. The question is: can we really trust the value of time "now" at the nanosecond scale? And without a world of quantum computers integrated with synchronous data teleportation, how can we?

Even with a daily NTP time synchronization message being sent to every server on the planet (and assuming all of the packets were actually processed and the time change was executed simultaneously), the CPU clocks in a world of globally distributed computers would still end up with inconsistent time due to the issue of time dilation.

And simply increasing the frequency of NTP time synchronization events to every second is not workable, since that would put too much load on all systems. Besides, NIST only allows clients to connect once every 4 seconds, and anything faster than that will be viewed as a potential denial-of-service attempt. And since an enterprise doesn't want to integrate every server with an external NTP server, there is also latency in updating the clocks of privately hosted time servers.

From a security technology perspective, I find this matter quite interesting as it relates to when a globally distributed system session should expire. The challenge of using the time since the last trusted event as a factor in an authentication and authorization decision is even greater when transactions are expected to execute with nanosecond latencies and millisecond timeouts.

When connecting to a system that has only one front door, the matter of expiring a user session is relatively simple: the session typically expires as long as 20 minutes after the last authorized event or transaction execution. But in a system of multiple computers integrated globally to execute transactions, getting a list of every event that has executed in that collective system in the past 200 nanoseconds is a very difficult question to answer as we approach the paradigm of quantum computing and nanoevents.
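As a minimal illustrative sketch (the timeout and skew values are placeholders, not a production design), an idle-timeout check can at least budget for clock skew between distributed nodes, so that a node whose clock runs slightly fast does not expire a session that another node would still consider valid:

from datetime import datetime, timedelta, timezone
from typing import Optional

IDLE_TIMEOUT = timedelta(minutes=20)    # illustrative idle timeout
MAX_CLOCK_SKEW = timedelta(seconds=5)   # illustrative skew budget between nodes

def session_expired(last_trusted_event: datetime, now: Optional[datetime] = None) -> bool:
    """Return True if the session should be treated as expired."""
    now = now or datetime.now(timezone.utc)
    return now - last_trusted_event > IDLE_TIMEOUT + MAX_CLOCK_SKEW

# Example: a session last used 19 minutes ago is still valid.
print(session_expired(datetime.now(timezone.utc) - timedelta(minutes=19)))  # False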

When searching the log files of 200 globally distributed computers for the events executed in the past 200 nanoseconds, starting from the point in time NOW, there is a good probability that the list of events reported back would vary from computer to computer, even if all of the events were executed locally and simultaneously, ruling out network latency.

The reason for this is that the present value of NOW varies from location to location due to the time-dilation effect of gravity on matter. One may even deduce that, since time moves faster at higher elevations, more events will have executed in the past 200 nanoseconds on a server located in position RU42, relatively perceived, that is.
