i0t : internet zero trust

While most read IoT as the Internet of Things, those of us following recent events in the security space (heartbleeding, shellshocked poodles) know that NIST is spot on with its recommendation to implement a “Zero Trust Architecture.”

If you’ve configured an Arduino, you know by now that it doesn’t take much to get the WiFi SSID and password off one of these little things, especially if it is equipped with a USB port that lets you connect a laptop directly to it. Most are, since that same port supplies power and is used for the initial setup that configures the device to connect to a network.

That raises the question of whether an Arduino even has the computational power and memory required to adequately encrypt the data the “thing” is sensing once it is on the network. And if data privacy is not a concern for your use case, then losing control of the things themselves should be. The simplicity of IoT development attracts many developers because the cost of entry is low and the market for wirelessly controlling anything with a switch is compelling. Simplicity and security, however, are orthogonal concerns.

While it may seem convenient to be able to turn on your air conditioner as you depart a plane so that your home is comfortable before you pull into the driveway, it would not be convenient to find out that somehow the “thing” was hijacked and had instead cranked up the heat while you were away. These are the kinds of things that should concern the consumer who is so enamored with a winking hub endorsed by a nesting actor obsessed with the fortune of perfectly dimmed lighting.

What is a Zero Trust Architecture? Start here: i0t

Now that we all understand the value of segmentation gateways at the API layer, which go beyond simply opening and closing ports like a traditional network firewall, we can discuss enclaves of trust domains and the ability to centrally manage policies across these zones of control.

I’ll be at Cloud Expo in Santa Clara on November 6, 2014 demonstrating how SOA Software API Gateways can play the role of a segmentation gateway Policy Enforcement Point for IoT API controllers, and how the SOA Policy Manager plays the role of the Policy Administration Point and the Policy Decision Point for each gateway.


Rapid Mobile App to OAuth Secured REST API Integration

Recorded Demo: August 26, 2014

Wireframes are still a good idea for communicating user interface requirements to others, however lately I would rather create them as an HTML5 prototype that is usable across heterogeneous devices with various screen sizes and resolutions. One tool I prefer is Appery.io, a browser-based, jQuery- and PhoneGap-powered mobile application development studio. Appery.io offers a faster path to publishing in an app store: once the HTML5 prototyping, user acceptance testing and usability work is complete, it can also generate the Android APK and iOS binaries without a manual port to Objective-C. That feature alone is a major time saver, and so is the drag-and-drop designer that wires a JSON API test response to UI components, creating a wireframe that also generates the runtime code.

Let’s say you have your APIs ready to consume and you know what data elements need to be displayed to the user. Perhaps some of the APIs are SOAP/XML and others are REST/JSON and they are secured with different authentication protocols.

To simplify the user interface design process, let’s agree to transform the SOAP services to REST APIs using an API Gateway, which also transforms the XML to and from JSON. Then configure the gateway to mediate authentication: the client app speaks OpenID and OAuth, while the gateway handles the various security protocols required by the downstream APIs being consumed. This avoids embedding different credentials for each API in the client application code, and it improves the security of the system by limiting connections to the APIs to the gateways alone, greatly reducing the attack surface and lowering the risk of system outages and malformed requests reaching the application/data tier.
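As a concrete illustration of the mediation pattern (not the SOA Software gateway itself), here is a minimal Python sketch using Flask and requests; the backend endpoint, operation name and gateway credentials are all hypothetical:

    from flask import Flask, jsonify
    import requests
    import xml.etree.ElementTree as ET

    app = Flask(__name__)
    BACKEND_URL = "https://legacy.example.com/AccountService"  # hypothetical SOAP endpoint

    SOAP_TEMPLATE = (
        '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">'
        "<soapenv:Body><getAccount><id>{id}</id></getAccount></soapenv:Body>"
        "</soapenv:Envelope>"
    )

    @app.route("/accounts/<account_id>")
    def get_account(account_id):
        # Client-side authentication (OAuth token validation) would be enforced here (omitted).
        # Re-shape the REST request as a SOAP call, using credentials only the gateway holds.
        soap = SOAP_TEMPLATE.format(id=account_id)
        resp = requests.post(BACKEND_URL, data=soap,
                             headers={"Content-Type": "text/xml"},
                             auth=("gateway-user", "gateway-secret"))
        # Flatten the XML response into JSON for the mobile client.
        root = ET.fromstring(resp.content)
        fields = {el.tag.split("}")[-1]: el.text
                  for el in root.iter() if el.text and el.text.strip()}
        return jsonify(fields)

The point of the sketch is that only the gateway ever talks to the downstream service, and only the gateway ever holds its credentials.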

Now that security and API transformation are in place, we are ready to begin mobile application client development. First, the developer requests an application ID and secret from the API developer portal, which is linked to the API gateway cluster’s policy decision point. The client app uses the app ID in an Authorization header to begin the OAuth process, authenticate the user and receive a token. This avoids sending user credentials with each API call and simplifies authorization, since the “scope” of the token restricts which API operations can be used. The user is only prompted to authenticate when the token has expired and there is no valid refresh token.

When creating the HTML5 client application in Appery.io, the process is to first configure a GET request to the OAuth server for an authorization code, and then to submit a POST request for the access and refresh tokens. These values are parsed and stored into variables that JavaScript can later read to insert the tokens into the Authorization header of each subsequent API call the application makes.
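Appery.io wires this up in JavaScript, but the shape of the exchange is easy to see in a standalone Python sketch; every URL, credential and scope below is hypothetical:

    import requests

    AUTHZ_URL = "https://oauth.example.com/authorize"   # hypothetical endpoints
    TOKEN_URL = "https://oauth.example.com/token"
    CLIENT_ID = "app-id-from-developer-portal"
    CLIENT_SECRET = "app-secret"
    REDIRECT_URI = "https://app.example.com/callback"

    # Step 1: the GET that sends the user off to authenticate; the server
    # redirects back to REDIRECT_URI with ?code=<authorization code>.
    authorize_link = (AUTHZ_URL + "?response_type=code&client_id=" + CLIENT_ID +
                      "&redirect_uri=" + REDIRECT_URI + "&scope=read")

    # Step 2: POST the code back to exchange it for access and refresh tokens.
    def exchange_code(code):
        resp = requests.post(TOKEN_URL, auth=(CLIENT_ID, CLIENT_SECRET), data={
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": REDIRECT_URI,
        })
        return resp.json()  # e.g. {"access_token": "...", "refresh_token": "...", ...}

    # Step 3: insert the token into the Authorization header of every API call.
    def call_api(access_token):
        return requests.get("https://api.example.com/v1/data",
                            headers={"Authorization": "Bearer " + access_token})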

Next, add the APIs the application will use to retrieve the data requested by the mobile user, and save the response of each test call. After placing the fields on the canvas that will display the returned data, use the data tab of the UI designer to visually wire (drag and link) each field in the JSON response payload to the UI field that will display that string of text. Once all of the input and output fields are mapped, test the app: view the HTML5 version in your browser via the QR code that links to the app sandbox URL. In under an hour you have alpha wireframes and a secured prototype that is ready to share.

Experiment with portrait versus landscape, since not all users share the same perspective, and adjust for inconsistencies across device types. Then begin the UAT process and iterate, iterate, iterate.

Appery.io and SOA Software are partnering because SOA makes it easy to configure API security properly, the SOA Policy Manager and API Gateway make it easy to debug API requests, and the Community Manager API portal is a turnkey solution that generates client application IDs linked to SLA quality-of-service policies, with reports that track how the mobile apps are consuming the various APIs.


Token Strengths & Mitigation Best Practices

An OAuth MAC token is similar to a WSSE token in that both use a nonce to mitigate replay attacks. However, using the 3-legged OAuth protocol to authenticate the user creates a smaller attack surface than a digest password in every WSSE token: it protects the user’s credentials, and it makes it easier to perform network-based operational entitlement authorization at the API gateway layer using scopes and licenses.
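For context, the WS-Security UsernameToken Profile defines the digest carried in each WSSE token as PasswordDigest = Base64(SHA-1(nonce + created + password)); a minimal Python sketch of what every token contains:

    import base64, hashlib, os
    from datetime import datetime, timezone

    def wsse_token(username, password):
        nonce = os.urandom(16)
        created = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
        # PasswordDigest = Base64( SHA-1( nonce + created + password ) )
        digest = hashlib.sha1(nonce + created.encode() + password.encode()).digest()
        return {
            "Username": username,
            "Nonce": base64.b64encode(nonce).decode(),
            "Created": created,
            "PasswordDigest": base64.b64encode(digest).decode(),
        }

The per-token nonce is what lets the receiver cache and reject replays, but notice that the user’s password participates in every single token, which is exactly the exposure the 3-legged OAuth flow avoids.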

The scope of a token is similar in concept to SAML attributes, which are commonly used for group memberships associated with the user subject.

I still talk to many enterprises using WSSE tokens with WS-Addressing, which further restricts what the receiving API target host will accept and, for asynchronous exchanges, specifies the reply-to address; this is similar to how hypermedia URL rewriting is used in the response body of a REST API.

Below is an excerpt from the OASIS WSSE spec on mitigating risks between the API gateway and the physical SOAP service:

“The use of the WSSE UsernameToken introduces no new threats beyond those already identified for other types of SecurityTokens. Replay attacks can be addressed by using message timestamps, nonces, and caching, as well as other application-specific tracking mechanisms. Token ownership is verified by use of keys and man-in-the-middle attacks are generally mitigated.

Transport-level security may be used to provide confidentiality and integrity of both the Username token and the entire message body.”


Public Cloud API Best Practices

10 minutes of slides

15 minutes of demo

 


Keep IT Private

To do so, use a crypto stack like GNU Privacy Guard (gpg).

0. Get gpg from http://www.gnupg.org/download/index.en.html

1. Create your GNU Privacy Guard key pair.

  • gpg --gen-key
  • Note: with version 2.x the executable is gpg2, so the command becomes gpg2 --gen-key.

2. To decrypt a file encrypted with the .gpg extension

  • gpg -d FileToDecrypt.tar.gpg > decrypted.tar

3. To send an encrypted file to yourself, encrypt it to the public key tied to your own email address (the key pair you created in step 1).

To share your public key so that others can encrypt files to you, export it with gpg (not ssh-keygen, which produces SSH keys rather than OpenPGP keys).

  • gpg --armor --export me@domain.com > mypublickey.asc

4. To encrypt a file using gpg you need the public key of the recipient.

Ask anyone who wishes to collaborate privately with you for their public key. No, you never need their private key, nor should you ever send anyone your private key or secret passphrase.

  • gpg -e -r foryou@domain.com file.xyz

Note that the public key of the recipient needs to exist in your local keyring; import it first with gpg --import theirkey.asc.

For more help:

How To Guide: http://www.dewinter.com/gnupg_howto/english/GPGMiniHowto.html

Cheat Sheet of Commands: http://irtfweb.ifa.hawaii.edu/~lockhart/gpg/gpg-cs.html

Command-line parameters for gpg2

-s, --sign                 make a signature
--clearsign                make a clear text signature
-b, --detach-sign          make a detached signature
-e, --encrypt              encrypt data
-c, --symmetric            encryption only with symmetric cipher
-d, --decrypt              decrypt data (default)
--verify                   verify a signature
-k, --list-keys            list keys
--list-sigs                list keys and signatures
--check-sigs               list and check key signatures
--fingerprint              list keys and fingerprints
-K, --list-secret-keys     list secret keys
--gen-key                  generate a new key pair
--gen-revoke               generate a revocation certificate
--delete-keys              remove keys from the public keyring
--delete-secret-keys       remove keys from the secret keyring
--sign-key                 sign a key
--lsign-key                sign a key locally
--edit-key                 sign or edit a key
--passwd                   change a passphrase
--export                   export keys
--send-keys                export keys to a key server
--recv-keys                import keys from a key server
--search-keys              search for keys on a key server
--refresh-keys             update all keys from a keyserver
--import                   import/merge keys
--card-status              print the card status
--card-edit                change data on a card
--change-pin               change a card’s PIN
--update-trustdb           update the trust database
--print-md                 print message digests
--server                   run in server mode

Options:

-a, --armor                create ascii armored output
-r, --recipient USER-ID    encrypt for USER-ID
-u, --local-user USER-ID   use USER-ID to sign or decrypt
-z N                       set compress level to N (0 disables)
--textmode                 use canonical text mode
-o, --output FILE          write output to FILE
-v, --verbose              verbose
-n, --dry-run              do not make any changes
-i, --interactive          prompt before overwriting
--openpgp                  use strict OpenPGP behavior

(See the man page for a complete listing of all commands and options)

Examples:

-se -r Bob [file]          sign and encrypt for user Bob
--clearsign [file]         make a clear text signature
--detach-sign [file]       make a detached signature
--list-keys [names]        show keys
--fingerprint [names]      show fingerprints


Race conditions that ‘talk too’ (TOCTTOU) much

Race conditions TOCTTOU much? Plenty can happen between the ‘time of check’ and the ‘time of use’. In this crack in time, malicious users exploit race conditions, a class of security vulnerability that has been abused in systems for almost four decades. Historically, TOCTTOU attacks have taken advantage of weak filesystem APIs and limited security controls around OPEN, RENAME, CHANGE OWNER (chown), and CHANGE MODE from read-only to read/write (chmod). Applications such as vi, gedit, rpm, emacs, gdm, and many other GNU programs have been exploited through TOCTTOU weaknesses, and an even longer list of 700+ symlink-related vulnerabilities is published in the NIST CVE/CCE Vulnerability Database at http://nvd.nist.gov
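The classic filesystem version of the bug is a check-then-use sequence; a minimal Python sketch (the path is illustrative):

    import os

    path = "/tmp/report.txt"  # world-writable directory: an attacker can race us here

    # Time of check: the file looks readable and harmless...
    if os.access(path, os.R_OK):
        # ...but in this window the attacker can replace it with a symlink
        # to /etc/passwd before the time of use below.
        with open(path) as f:
            data = f.read()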

In 2008, at FAST ’08, the 6th USENIX Conference on File and Storage Technologies, researchers from IBM, Microsoft, and UC Berkeley collaborated on a paper on this topic entitled Portably Solving File TOCTTOU Races with Hardness Amplification (http://www.usenix.org/event/fast08/tech/full_papers/tsafrir/tsafrir.pdf). In the paper the authors describe how a TOCTTOU exploit can be used by an attacker to create a fake password file and rename it to replace the real /etc/passwd file, and, worse yet, how to prolong the attack window using a filesystem maze attack pattern.

Now move beyond single servers with shared /tmp space on the local filesystem and back to the present-day reality of globally distributed, rapidly integrated, cloud-powered applications with elastic databases scattered across servers in multiple locations. The mitigation that reduces the risk of a TOCTTOU race condition in your system is to mediate the request and response with a single atomic event that collapses the check (authenticate) operation and the use (open) event.
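In filesystem terms, collapsing the check and the use means operating on the opened file descriptor rather than the pathname, so there is no window for an attacker to swap the file out; a minimal sketch, assuming a POSIX system:

    import os

    path = "/tmp/report.txt"

    # Open first, refusing to follow symlinks, so there is no gap between
    # the check and the use: both happen against the same file descriptor.
    fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
    try:
        st = os.fstat(fd)                  # check properties of the opened file itself
        if st.st_uid != os.getuid():       # example policy: only read files we own
            raise PermissionError(path)
        data = os.read(fd, st.st_size)     # use the same descriptor we checked
    finally:
        os.close(fd)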

What more can be done in distributed computing to guard against race conditions?

Build mature systems with atomic transactions as the design principle, and buffer the system with a checkpoint between the user zone and the service zone, aka the DMZ. A mediation proxy in the DMZ that performs the authentication and authorization checks does add latency, but it improves the quality of the system by adding load balancing, packet and message-body introspection, event logging, and possibly a second authorization on the response to verify that the user is authorized to receive the object requested.

A second authorization check can prevent accidental data leakage; it makes it more difficult to create a race condition; and it can detect that the user is no longer authorized to read the content, or that the user is attempting to access an object without execute, read or write entitlements.
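As a sketch of the idea, with a hypothetical in-memory policy table standing in for the gateway’s policy decision point:

    POLICY = {"alice": {"/reports/q3"}}        # hypothetical: user -> readable objects
    BACKEND = {"/reports/q3": b"quarterly numbers"}

    def authorized(user, obj):
        return obj in POLICY.get(user, set())

    def handle(user, path):
        if not authorized(user, path):         # first check, on the request
            return 403, b""
        body = BACKEND[path]
        if not authorized(user, path):         # second check, on the response,
            return 403, b""                    # in case policy changed mid-flight
        return 200, body

In a real deployment the two checks are separated by backend latency, so the second check can catch an entitlement that was revoked while the request was in flight.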

Hence, if you are concerned that your firewall is not doing enough for you, or you worry that your DMZ has been compromised through the firewall TCP split-handshake vulnerabilities, consider an extra layer of mediation between client-side applications and server-side APIs.

Many see the value of a policy enforcement point deployed behind the firewall and in front of the API service; a PEP such as the SOA Software Unified API Gateway in that position reduces the risk of a man-in-the-middle attack.

 


Consensual Trustworthy Computing

Since June of 2010, I’ve enjoyed collaborating with the Trusted Cloud Initiative subgroup of the Cloud Security Alliance. In the second half of 2010, we reached consensus on the language of the audit assessment questions, which are aligned to PCI, HIPAA, NIST, and ISO best practices, and on which mitigations are expected for various cloud topologies including public, private, hybrid, IaaS, PaaS, and SaaS. We now have a new RACI matrix template to facilitate the service level agreement negotiation process and to align on which parties are responsible, accountable, consulted and informed. When multiple parties are involved in keeping a system current and operational, the RACI matrix is a very effective method to ensure that the proper resources are allocated to manage the ITIL service support and service delivery procedures. It is in the proper separation of duties that technology mitigations become more trustworthy, especially in the governance over production change management, where a human quality control process verifies and validates changes prior to deployment to mitigate the risk of an unauthorized change.
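For illustration only, a few rows of such a matrix for a hybrid cloud engagement might look like this (the parties and assignments are hypothetical):

    ITIL Procedure           Customer    SaaS Provider    IaaS Provider
    Change management        C, I        R, A             C
    Incident management      I           R, A             C
    Hypervisor patching      I           C                R, A

Each cell records who is Responsible, Accountable, Consulted, or Informed, so no procedure is left without an accountable party when responsibilities span organizations.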

The CSA TCI reference architecture team is focused on the ISO and NIST requirements in the CSA Security Controls Matrix to ensure that our solution addresses the mitigations most important to the marketplace. This requirements-based approach towards a trustworthy cloud is in alignment with the guiding principles of the TCI mission:

  1. Define protections that enable trust in the cloud.
  2. Develop cross-platform capabilities and patterns for proprietary and open-source providers.
  3. Facilitate trusted and efficient access, administration and resiliency for the customer/consumer.
  4. Provide direction to secure information that is protected by regulations.
  5. Facilitate proper and efficient identification, authentication, authorization, administration and auditability.
  6. Centralize security policy, maintenance operation and oversight functions.
  7. Keep access to information secure yet still easy to obtain.
  8. Delegate or federate access control where appropriate.
  9. Be easy to adopt and consume, supporting the design of security patterns.
  10. Be elastic, flexible and resilient, supporting multi-tenant, multi-landlord platforms.
  11. Address and support multiple levels of protection, including network, operating system, and application security needs.

Regarding the implementation of the reference architecture, the team agreed in October that the SAML 2.0 HTTP POST binding is the most appropriate for doing business with a public, Internet-connected cloud Policy Enforcement Point (PEP) and a legacy Identity Provider Policy Decision Point (PDP) and/or a legacy LDAP directory service.  With the SAML HTTP POST binding, the user’s web browser sends the SAML token to the cloud, rather than integrating the cloud API service provider directly with the enterprise Identity Provider (as the SAML Artifact binding does).  To ensure that architectural decisions such as these are made properly, we’ve adopted the ISO/IEC 9126 decision-criteria-based process for software engineering product quality control.  Technology choices are made based on how each solution scores against the following criteria: Functionality, Reliability, Usability, Efficiency, Maintainability, and Portability.  It is this decision-making process that drove the team to conclude that SAML tokens are more functional and reliable for enterprise application integration with cloud services than a technology such as OpenID, which is better suited for cloud-to-cloud application single sign-on.
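To make the binding concrete, here is a minimal sketch of what the HTTP POST binding produces on the wire: the Identity Provider returns an auto-submitting form, and the user’s browser, not a back channel, delivers the base64-encoded token to the cloud provider’s assertion consumer service (the URLs and assertion body are hypothetical):

    import base64

    ACS_URL = "https://cloud.example.com/saml/acs"   # hypothetical cloud PEP endpoint
    saml_response = "<samlp:Response>...signed assertion...</samlp:Response>"  # elided

    # The IdP renders this form; the browser immediately POSTs it to the cloud.
    form = """<form method="post" action="{acs}">
      <input type="hidden" name="SAMLResponse" value="{b64}"/>
      <input type="hidden" name="RelayState" value="/home"/>
    </form>
    <script>document.forms[0].submit();</script>""".format(
        acs=ACS_URL,
        b64=base64.b64encode(saml_response.encode()).decode())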

I would also like to take a moment to acknowledge some key contributors, who have participated and presented various topics.  Marlin Pohlman’s contribution to represent the cloud audit standards, aligning the Security Controls Matrix with the expected requirements of various reference architectures, has resulted in an improved set of audit questions and a solid set of requirements for the Trusted Cloud Initiative that are directly aligned with the certification subgroup work led by Nico Popp.  David Sherr and Jairo Orea have contributed a significant amount of time and energy to align the TCI work with the wants and needs of the financial services and insurance industries.  Dr. Shehab presented work that he and his students have done at UNCC to speed up the processing of XACML rules by reordering and categorizing policies.  Subra Kumaraswamy has contributed a significant amount of energy on Identity and Access Management requirements and technologies, working with Scott Matsumoto and the implementation subgroup.

With the Cloud Security Alliance Congress this week in Orlando, the security and privacy minds of the world are debating the risks and benefits of moving away from Basic Authentication toward multi-factor authentication techniques that combine what you have (a smartcard), what you know (a shared secret), and who you are (biometric data), versus open authentication protocols such as OAuth and the emerging OAuth2 specification.

For more information about the Trusted Cloud Initiative, or to join, go to http://www.cloudsecurityalliance.org/trustedcloud.html; once you are a member you will be able to contribute to the CSA TCI community knowledge repository.
