Outsourcing infrastructure to cloud service providers has fundamentally changed the face of information technology and corporate architectures in the last decade or so.
Flexible, fast-paced development. Rapid deployment. Scalability. Resilience. Minimisation of in-house hosted infrastructure. The growth of microservices and mobile technologies.
These drivers lead to a new set of challenges in terms of the risk to information stored, processed and managed within the cloud. In turn, the evolution of technology presents assurance providers with challenges in terms of testing approaches and methodologies.
We're all familiar with IaaS, PaaS and SaaS cloud-based offerings. Most hosting providers, including Amazon AWS, Google and Microsoft Azure, have well-defined policies and procedures for gaining authorisation to conduct a penetration test against dedicated hosts or environments you use within their architectures. Testing without such authorisation from the hosting provider could well be illegal, depending on the jurisdiction in which you are operating, and no ethical tester wants to be on the wrong side of the law. The key point for many cloud hosting providers is that the processes to gain authorisation for testing apply specifically to your dedicated cloud services.
From Amazon, for example: "Our policy only permits testing of EC2 and RDS instances that you own. Tests against any other AWS services or AWS-owned resources are prohibited."
This presents a pressing problem for people looking to adopt the newest logical step in the cloud computing revolution: serverless architectures. When adopting a serverless architecture, you abstract away all management of the underlying infrastructure from your solutions entirely and rely heavily upon third-party backends to make things fit together - BaaS (Backend as a Service) and FaaS (Function as a Service) are terms you will often hear surrounding such serverless architectures.
Back in 2014, Amazon announced AWS Lambda, providing serverless function execution with automated scaling and billing based only on the compute consumed. When you use a FaaS service such as Lambda, the functions you deploy will likely be executing on the same servers that are executing functions for other tenants of the cloud service provider. With Lambda, this means that if you want to conduct a penetration test of your code running on the service, Amazon AWS does not have a well-defined automatic process for granting you permission to do so.
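To make the testing target concrete: a Lambda function invoked through API Gateway is essentially a handler that receives the HTTP request (including the caller-supplied JSON body) as an event. The sketch below is illustrative only - the function name and fields are hypothetical, not taken from any particular deployment.

```python
import json

# A minimal sketch of a Python Lambda handler behind API Gateway.
# The `event` dict carries the HTTP request; `event["body"]` holds the
# raw JSON string submitted by the caller. Names here are illustrative.
def lambda_handler(event, context):
    # Parse the caller-supplied JSON body defensively - attacker-controlled
    # input arrives here, which is exactly what a tester will manipulate.
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400,
                "body": json.dumps({"error": "invalid JSON"})}

    name = body.get("name", "world")
    return {"statusCode": 200,
            "body": json.dumps({"message": f"hello, {name}"})}
```

It is this application-layer code - not the servers it runs on - that a customer can meaningfully ask to have tested.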
Why not? Well, for good reasons. The first question is obviously "what is a penetration test?". The definition of penetration testing varies from provider to provider - in some cases, a penetration test may involve aggressive and load-intensive scanning exercises targeted at the underlying infrastructure components such as load balancers and supporting web servers. Taking Lambda as an example, the resource consumption from these types of test could increase Amazon's costs in delivering the service to their customers and, in extreme cases, could even cause degradation of the quality of service provided to other tenants of the facility. Generally, the restrictions are in place because, quite rightly, the hosting provider is looking to protect the interests of all of their customers.
We recently encountered this challenge when seeking authority to conduct a penetration test for a customer of ours who has adopted a fully serverless architecture hosted on Amazon AWS. Our customer was happy to accept the security of the operating platform provided by AWS Lambda, but wanted technical assurance from us on two points: that the functions developed for deployment onto Lambda, and accessed through Amazon's API Gateway, were functioning as expected, and that the controls in place to prevent unauthorised access between users were properly implemented.
When the customer first asked AWS for authority to conduct a penetration test, the original answer from AWS was 'No', based on the policy above. Did this mean we could not provide any technical assurance to our customer? Did this mean that the customer would have to run up a dedicated EC2 instance, configure the host and deploy their functions onto that EC2 instance for testing? No! We just had to adopt a flexible and open approach to our discussions with AWS so that we could demonstrate that we would meet AWS's requirements for acceptable use of the services.
We entered into discussions with AWS together with our customer.
- We explained our customer's concerns and their wish to gain assurance over their own functions as deployed to Lambda.
- We detailed the test cases that we would like to address, and the way in which we would conduct them: focusing on the manipulation of the JSON payloads provided to the functions via the service, reviewing the business logic put together in the functions developed by our customer and manually testing authorisation controls in place between users.
- We talked about traffic volumes and provided AWS with assurances that our traffic volumes to the service would be minimal during the engagement.
- We clarified that there would be no targeting of the supporting infrastructure or web technologies supporting the service.
- We provided AWS with emergency contacts in case there were any functional issues observed, so that we could stop testing immediately.
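The test cases described above can be sketched in code. The helper below derives tampered variants of a known-good JSON payload - type confusion, substituted identifiers to probe authorisation between users, and injected fields. It is a minimal illustration of the approach, not our actual tooling, and field names such as "user_id" and "is_admin" are purely hypothetical.

```python
import copy

# Sketch of payload-manipulation test cases: given a baseline JSON payload
# (as a dict), derive tampered variants to send to the function under test.
def tampered_variants(payload):
    variants = []

    # Swap each value for an unexpected type, to probe loose input validation.
    for key in payload:
        mutated = copy.deepcopy(payload)
        mutated[key] = ["unexpected-type"]
        variants.append(mutated)

    # Substitute identifier fields, to probe horizontal authorisation
    # controls (does user A's session grant access to user B's records?).
    for key in payload:
        if key.endswith("_id"):
            mutated = copy.deepcopy(payload)
            mutated[key] = "another-users-id"
            variants.append(mutated)

    # Inject an extra field the function should ignore or reject.
    mutated = copy.deepcopy(payload)
    mutated["is_admin"] = True
    variants.append(mutated)

    return variants
```

Each variant would then be submitted manually, and at low volume, to the API Gateway endpoint under the agreed parameters, with responses compared against the baseline - keeping the testing squarely within the customer's own code and well away from the shared infrastructure.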
It took us a little while to give AWS the comfort they needed, but by engaging in a productive dialogue we gained authority from AWS to conduct the testing and answer the questions our customer needed answering. This is just one of many examples of the way the information security and technical assurance industry is having to mature and adapt to changes in the way technologies are being adopted and used. Taking a proactive and collaborative approach, being open and willing to enter into detailed discussions about how testing will be conducted, looking for solutions to the problems thrown up by the adoption of new technologies: these things are vitally important.
TL;DR: Don't discount the ability to gain authority to conduct technical assurance testing in shared cloud environments and serverless architectures when the first answer from the hosting provider is 'No'. Engaging in a productive dialogue with the hosting provider will often allow you to gain permission to work within reasonable parameters and still meet your technical assurance requirements.