(Part One – Series 1 of 5) What is Malicious Software and How Can it be Overcome? A Series of Five to Take You Through
As a team at Techaai, we decided to dig deep into malicious software: its causes, its impact on our systems, and how it can be overcome. After writing about the 5 tips to protect your website from malicious attacks, we received many inquiries about how one may protect themselves from such malicious software behaviors.
In this five-part series, we shed light on: 1. Secure programs and fixing faults, 2. Non-malicious program errors, 3. Viruses, 4. Malicious software, and 5. Methods of control.
1. Secure Programs
Consider what it means to say that a program is “secure”: security implies some degree of trust that the program enforces expected confidentiality, integrity, and availability. From the point of view of a program or a programmer, how can we look at a software component or code fragment and assess its security? This question is, of course, similar to the problem of assessing software quality in general.
One way to assess security or quality is to ask people to name the characteristics of software that contribute to its overall security. However, we are likely to get different answers from different people. This difference occurs because the importance of the characteristics depends on who is analyzing the software. For example, one person may decide that code is secure because it takes too long to break through its security controls. And someone else may decide code is secure if it has run for a period of time with no apparent failures. But a third person may decide that any potential fault in meeting security requirements makes code insecure.
An assessment of security can also be influenced by someone’s general perspective on software quality. For example, if your manager’s idea of quality is conformance to specifications, then she might consider the code secure if it meets security requirements, whether or not the requirements are complete or correct.
This security view played a role when a major computer manufacturer delivered all its machines with keyed locks, since a keyed lock was written in the requirements. But the machines were not secure, because all locks were configured to use the same key. Thus, another view of security is fitness for purpose; in this view, the manufacturer clearly had room for improvement.
One approach to judging quality in security is counting fixed faults. You might argue that a module in which 100 faults were discovered and fixed is better than another in which only 20 faults were discovered and fixed, suggesting that more rigorous analysis and testing had led to the finding of the larger number of faults. Early work in computer security was based on the paradigm of “penetrate and patch,” in which analysts searched for and repaired faults.
Often, a top-quality “tiger team” would be convened to test a system’s security by attempting to cause it to fail. The test was considered to be a “proof” of security; if the system withstood the attacks, it was considered secure. Unfortunately, far too often the proof became a counterexample, in which not just one but several serious security problems were uncovered.
The problem discovery in turn led to a rapid effort to “patch” the system to repair or restore security. However, the patch efforts were largely useless, often making the system less secure rather than more secure because they frequently introduced new faults. There are three reasons why. First, the fault often had non-obvious side effects in places other than the immediate area of the fault. Second, the fault sometimes could not be fixed properly, because repairing it would degrade system functionality or performance.

Third, the pressure to repair a specific problem encouraged a narrow focus on the fault itself and not on its context. In particular, the analysts paid attention to the immediate cause of the failure and not to the underlying design or requirements faults.
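To make this concrete, here is a minimal hypothetical sketch (the function and scenario are invented for illustration, not taken from any real system) of how a narrow “patch” can introduce a new fault outside the immediate area of the original one:

```python
# Hypothetical "penetrate and patch" scenario: the original code crashed
# with an IndexError when a blank username was submitted. A hurried patch
# was applied to stop the crash.

def truncate_username(name):
    # The narrow patch: silently return an empty string instead of crashing.
    if not name:
        return ""
    return name[:8]  # original behavior: keep the first 8 characters

# Side effect of the patch: callers that relied on the crash (or an explicit
# error) to reject blank logins now receive "" as if it were a valid
# username -- a new security fault far from the line that was "fixed".
print(truncate_username("administrator"))  # prints "administ"
print(truncate_username(""))               # prints "" -- silently accepted
```

The patch addresses the reported symptom (the crash) but ignores the underlying requirement that empty usernames must be rejected, illustrating why patching the immediate fault, rather than its context, tends to move faults around instead of removing them.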