Wednesday, May 22, 2013

Testing, in the black box (ATV), Security & Privacy



How Automated Testing Vehicles (ATVs) should include Pentesting.

Why should privacy officers get involved in the development and regression testing process?

Why does IT need to improve its testing strategies?

Pitfalls in testing and security/privacy concerns are what give people nightmares. Privacy officers need a better understanding of the environment they work in, and IT people need to embrace the notion that privacy/security starts at the beginning. That way, the chances of ending up on the front page of a newspaper because of a breach and/or a failure are minimized. NO ONE wants to phone the CIO about a problem like this. It is a team effort.

I do have to warn you, the reader, that some of the material may be a little IT oriented. But in an organization where one needs to satisfy a number of different objectives, I would suggest at least a basic knowledge of the IT process is needed, and that the IT personnel need to understand the present compliance/regulatory landscape.

Some definitions are warranted before I begin.

ATV, or Automated Testing Vehicle. What is it? Why do I care? And is it a 'best practice' (one of the most overused phrases at present)?

The idea is fairly simple: a set of automated scripts that can be run to test the system in question. The objective is to test the system before any changes are implemented. The process should set up the files that will be used for testing (see one of my previous blog posts concerning using data for testing), then run the test scripts, and afterwards run the comparison reports and highlight items of concern from the test just executed. All this is done in an automated fashion. A rather simple concept, but one that can change processes in a good way.
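To make this concrete, here is a minimal sketch of what such an ATV pipeline could look like. The script names, directories, and file layout are illustrative assumptions, not references to any particular product:

```python
import subprocess
from pathlib import Path

# Hypothetical stages of an ATV run; all names below are placeholders.
SETUP_SCRIPTS = ["load_test_data.sh"]      # stage the files used for testing
TEST_SCRIPTS = ["core_functions_test.sh"]  # exercise the system under test
BASELINE_DIR = Path("baseline")            # known-good outputs
RESULTS_DIR = Path("results")              # outputs from this run

def run_all(scripts):
    for script in scripts:
        # Fail fast: a broken setup or test step should stop the pipeline.
        subprocess.run(["sh", script], check=True)

def compare_results():
    """Flag every result file that differs from its known-good baseline."""
    concerns = []
    for result in RESULTS_DIR.glob("*.out"):
        baseline = BASELINE_DIR / result.name
        if not baseline.exists() or baseline.read_bytes() != result.read_bytes():
            concerns.append(result.name)
    return concerns

if __name__ == "__main__":
    run_all(SETUP_SCRIPTS)      # 1. set up the test files
    run_all(TEST_SCRIPTS)       # 2. run the test scripts
    diffs = compare_results()   # 3. run the comparison reports
    print("Items of concern:", diffs or "none")  # 4. highlight concerns
```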

Well, there is more to this. But let me define another term or two first.

IT systems that are down cost the enterprise money in lost revenue and goodwill. As an example, in 2012 Google had an outage:
Google June 2012 down for 10 min.

The ballpark cost that Google suffered was calculated at about $750,000. And that was for 10 minutes. Now, I am not suggesting all downtime costs are that high; it depends on the circumstances. But I am sure no one would like to find out for their own company.

Another good example of the costs is cited at costs of web down time per industry

This site allows you to calculate the cost of a web site being down per industry/application. It's an eye-opener, to say the least.
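The arithmetic behind such a calculator is simple. Here is a back-of-the-envelope sketch; the hourly revenue figure is an assumption chosen only to illustrate the scale, not an actual reported number:

```python
# Back-of-the-envelope downtime cost: revenue per hour times the
# fraction of an hour lost. The revenue figure is an illustrative
# assumption, not an actual reported number.
hourly_revenue = 4_500_000   # dollars per hour (assumed)
outage_minutes = 10

cost = hourly_revenue * (outage_minutes / 60)
print(f"Estimated cost of a {outage_minutes}-minute outage: ${cost:,.0f}")
# -> Estimated cost of a 10-minute outage: $750,000
```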

In another 'word', downtime is BAD/EXPENSIVE. *Yes, I know that is two words.* But joking aside, we need to reduce unavailability as much as possible.

PenTesting. Wikipedia link. The Information Systems Audit and Control Association (ISACA) defines penetration testing as "A test of the effectiveness of security defences through mimicking the actions of real-life attackers."

(For the reader who is more concerned with Privacy/Security, please read on)

So now let's proceed. When an application change happens, IT personnel (or a designated organization) test the changes (i.e. regression testing). They test the change to see if it works. Now, depending on the process that is followed, a user may also test/approve the same series of changes for user approval. Fine, right? Do you notice something missing in the above? In fact, there is more than one item here that needs to be defined/explored.

For many organizations, testing to maintain the basic functions within an application happens in a haphazard way. Sure, the change is tested, and to get to the enhancements some basic functions are exercised as well. But, based on my anecdotal experience, on many occasions the entire set of core functions of the changed application is not tested on a consistent basis. All the basic core functions should be completely tested whenever there is a change.

As an example, if the application in question is a public-facing web application (a web store, say), basic function testing should also be done. Test, for example, the ability to add/change credit card information and make sure that the update still works. Test adding an item to the shopping cart, and so on.
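As a rough sketch, such smoke tests could look something like the following. The base URL, endpoints, and payloads are hypothetical, invented purely for illustration:

```python
import requests

# Hypothetical smoke tests for a web store's core functions.
# The base URL, endpoints, and payloads are assumed for illustration.
BASE = "https://shop.example.com/api"
session = requests.Session()

def test_update_credit_card():
    resp = session.put(f"{BASE}/account/credit-card",
                       json={"number": "4111111111111111", "expiry": "12/25"})
    assert resp.status_code == 200, "credit card update is broken"

def test_add_to_cart():
    resp = session.post(f"{BASE}/cart/items",
                        json={"sku": "ABC-123", "qty": 1})
    assert resp.status_code in (200, 201), "add-to-cart is broken"

if __name__ == "__main__":
    test_update_credit_card()
    test_add_to_cart()
    print("Core store functions still work")
```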

So if the new function within the application fails, you have at least verified that the basic core functions, the ones you need to keep the doors open, will still operate.

Imagine an error occurs at your bank after the 'improved mobile bank portal' (the change being implemented) goes live, yet the basic functions were tested successfully. Logic would dictate that the basic functions should still work (you can still pay bills) even if the enhancement of the bank's mobile app does not. Corrections can then be retested and implemented with minimal cost/embarrassment to the organization.

I am therefore advocating standard testing scripts that confirm, even with the changes that are going to be implemented, that ALL the core functions are still accessible.

So, to implement a process like this, you first need to map out the basic functions that you cannot live without. Once that is done and scripts are created, an automated process should be built. When ready, a series of scripts can be executed with little human intervention (less chance for human error). The 'best practice' (there is that phrase again) would be something along the lines of submitting the scripts and going home. When you get into the office the following day, the results are ready for analysis/correction, etc.
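A minimal sketch of such a 'submit and go home' runner, with placeholder script names, might look like this:

```python
import datetime
import subprocess

# One way to "submit the scripts and go home": a wrapper that runs the
# suite unattended and leaves a report for the morning. The script
# names are placeholders.
SUITE = ["core_functions.sh", "web_store_smoke.sh"]

def nightly_run(report_path="atv_report.txt"):
    with open(report_path, "w") as report:
        report.write(f"ATV run started {datetime.datetime.now()}\n")
        for script in SUITE:
            result = subprocess.run(["sh", script],
                                    capture_output=True, text=True)
            status = "OK" if result.returncode == 0 else "FAILED"
            report.write(f"{script}: {status}\n")
            if status == "FAILED":
                report.write(result.stderr + "\n")
        report.write(f"ATV run finished {datetime.datetime.now()}\n")

if __name__ == "__main__":
    nightly_run()  # schedule with cron or Task Scheduler to run overnight
```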

This should ensure that even if the new change fails, you, the customer, can still do business with the organization in question. This is what some people call an ATV (see above). You can think of this process as your insurance policy.

However, let's take this further. Why test only the basic functionality of the application? Should we also test for security/privacy issues? Should the company's privacy/security office ensure that this type of testing and verification is also included within an ATV and executed whenever anything changes?

Absolutely!

A process that includes PenTesting (see above) is something one should consider adding to the above-mentioned ATV. With any change, there is always a chance that a vulnerability has been introduced that was not there before.
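As a sketch, a few lightweight security probes could be bolted onto the ATV along these lines. These are simple sanity checks, not a substitute for a full penetration test, and the target URL is a placeholder:

```python
import requests

# Lightweight security probes that could run inside the ATV after
# every change. The target URL is a placeholder; these checks are a
# sanity net, not a full penetration test.
TARGET = "https://shop.example.com"

def check_security_headers():
    """Report any common security headers missing from the response."""
    resp = requests.get(TARGET)
    return [h for h in ("Strict-Transport-Security",
                        "X-Frame-Options",
                        "Content-Security-Policy")
            if h not in resp.headers]

def check_error_leakage():
    """A malformed query should not leak stack traces or SQL errors."""
    resp = requests.get(f"{TARGET}/search", params={"q": "' OR '1'='1"})
    return [marker for marker in ("Traceback", "SQL syntax", "ORA-")
            if marker in resp.text]

if __name__ == "__main__":
    print("Missing security headers:", check_security_headers() or "none")
    print("Possible error leakage:", check_error_leakage() or "none")
```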

Any failure can, by its very nature, create the potential to expose sensitive information: business secrets and/or Personally Identifiable Information (PII), to name but two potential headaches.

There is software in the marketplace that has the capability to engage/test/analyze applications for vulnerabilities. Some of it I have mentioned previously; other products with the needed capabilities are also available.

So I suggest creating an ATV process that covers the basic functionality of the application/system in question as well as additional testing for security/privacy. All of this should be automated so that more extensive testing can be executed while reducing the chance of human error.

Privacy officers need to ensure that any changes that are implemented will not cause exposure that may be costly. IT people need to make sure that the basic system functions still run, no matter what is changed.

Finally, while no one can claim in absolute terms that there will be no issues, following these basic concepts can help reduce the chance that the CIO needs to be called because of an issue.


2 comments:

  1. Summary: More testing at more cost. If it's not my money, I'm not against it. But the article avoids the main problem in IT, and in our culture.

    "Nobody knew" is the reply when a problem occurs. That is blatantly false. In my 30 yrs in IT and years in insurance investigation prior to IT, most problems, whether security or other, were correctly identified before the problem occurred. The decision makers decided to "accept the risk". In most cases when the security person is not the one to predict the problem, the security person goes along with management and does not support those who express concern about the flaw.

    This is in our culture. It is not unique to IT. Bush, Obama, and the media all say "nobody knew" when the fact is that the problem was accurately predicted.

    Corollary of "Nobody Knew": IT developers know the weaknesses of their system. They protested the bad design or vendor product but were told to stand down. So they allowed for "work-arounds", knowing full well that they would be needed later. Those work-around features are the security risk.

    Are the work-arounds tested for security?

  2. There is a difference between mitigating risks and knowingly putting issues/problems into production. They are not the same.

    While there is no perfect solution (ask any software developer, including people who work for NASA or Boeing), there is no such thing as bug-free code. There is no perfect human, and 99.9% of diamonds are, in some fashion, imperfect. So to say that most problems are identifiable beforehand is somewhat misleading.

    Sure, companies know there will be problems. But many times you don't know what you don't know. So does the company know beforehand that there will be issues? Sure. But do they know what they are? Most likely not.

    We live in an imperfect world. Yes, you can test till you are blue in the face, but there still may be issues.

    Any company that knowingly implements security/privacy issues will be in for a BIG shock. The landscape has changed from the '80s, when there were no legal requirements for privacy and major companies were not appearing all the time on the front page of the WSJ because of some sort of issue.


    Yes, you do have the oddball company that tries to 'get away with it' ONCE. But not twice.


    As for work-arounds... not in my experience working in financial institutions (IT) and with organizations specifically concerned with security and privacy (33+ years, if I want to date myself). Especially in the current environment.


