{{Short description|Software development methodology}}
{{Use American English|date=November 2020}}
{{howto|date=March 2012}}
'''Defensive programming''' is a form of [[defensive design]] intended to ensure the continuing function of a piece of software under unforeseen circumstances.
Defensive programming is an approach to improve software and [[source code]], in terms of:
* Making the software behave in a predictable manner despite unexpected inputs or user actions.
Overly defensive programming, however, may safeguard against errors that will never be encountered, thus incurring run-time and maintenance costs.
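As a minimal illustration (a sketch with hypothetical names, not drawn from any particular codebase), a defensively written routine validates its arguments and fails in a predictable, documented way instead of assuming that callers always pass sane values:

<syntaxhighlight lang="c">
#include <stddef.h>
#include <string.h>

// Copies src into dest. Returns 0 on success, -1 on invalid or oversized input.
int copy_name(char *dest, size_t dest_size, const char *src) {
    if (dest == NULL || src == NULL || dest_size == 0) {
        return -1;                 // Reject unusable arguments up front.
    }
    if (strlen(src) >= dest_size) {
        return -1;                 // Refuse input that would not fit (leave room for '\0').
    }
    strcpy(dest, src);             // Safe: the length was checked above.
    return 0;
}
</syntaxhighlight>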
== Secure programming ==
{{main|Secure coding}}
Secure programming is the subset of defensive programming concerned with [[computer security]]. Security is the concern, not necessarily safety or availability (the [[software]] may be allowed to fail in certain ways). As with all kinds of defensive programming, avoiding bugs is a primary objective; however, the motivation is not so much to reduce the likelihood of failure in normal operation as to reduce the [[attack surface]]: the programmer must assume that the software might be misused actively to reveal bugs, and that bugs could be exploited maliciously. For example, the following function copies user input into a fixed-size buffer without checking its length:
<syntaxhighlight lang="c">int risky_programming(char *input) {
char str[1000];
Line 27 ⟶ 26:
// ...
}</syntaxhighlight>
The function will result in undefined behavior when the input is over 1000 characters. Some programmers may not feel that this is a problem, supposing that no user will enter such a long input. This particular bug demonstrates a vulnerability which enables [[buffer overflow]] exploits. Here is a solution to this example:
<syntaxhighlight lang="c">int secure_programming(char *input) {
char str[1000+1]; // One more for the null character.
Line 35 ⟶ 34:
// Copy input without exceeding the length of the destination.
strncpy(str, input, sizeof(str));
// If strlen(input) >= sizeof(str) then strncpy won't null terminate.
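A commonly used alternative (shown here as a sketch, not part of the original example) is <code>snprintf</code>, which always null-terminates the destination when its size is non-zero and returns the length the output would have had, so truncation of over-long input can be detected and rejected explicitly:

<syntaxhighlight lang="c">
#include <stdio.h>

int secure_programming_snprintf(const char *input) {
    char str[1000 + 1];
    int needed = snprintf(str, sizeof(str), "%s", input);
    if (needed < 0 || (size_t)needed >= sizeof(str)) {
        return -1;  // Input was too long (or an encoding error occurred); reject it.
    }
    // ... str now holds a complete, null-terminated copy of input.
    return 0;
}
</syntaxhighlight>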
== Techniques ==
Here are some defensive programming techniques:

=== Intelligent source code reuse ===
If existing code is tested and known to work, reusing it may reduce the chance of bugs being introduced.
However, reusing code is not ''always'' good practice: reused code brings with it any bugs and security vulnerabilities it already contains, and it may not have been written with the new project's requirements in mind.
When considering using existing source code, a quick review of its modules (sub-sections such as classes or functions) helps the developer spot potential vulnerabilities and ensures that the code is suitable to use in the project. {{Citation needed|reason=Cannot find source, Was from a video viewed~April 2015|date=November 2021}}
==== Legacy problems ====
Legacy problems are problems inherent when old designs are expected to work with today's requirements, especially when the old designs were not developed or tested with those requirements in mind.
Many software products have experienced problems with old legacy source code; for example:
* [[Legacy code]] may not have been designed under a defensive programming initiative, and might therefore be of much lower quality than newly designed source code.
* Legacy code may have been written and tested under conditions which no longer apply. The old quality assurance tests may no longer be valid.
** '''Example 1''': legacy code may have been designed for ASCII input but now the input is [[UTF-8]].
** '''Example 2''': legacy code may have been compiled and tested on 32-bit architectures, but when compiled on 64-bit architectures, new arithmetic problems may occur (e.g., invalid signedness tests, invalid type casts, etc.); see the sketch after this list.
** '''Example 3''': legacy code may have been targeted for offline machines, but becomes vulnerable once network connectivity is added.
* Legacy code is not written with new problems in mind. For example, source code written in 1990 is likely to be prone to many [[code injection]] vulnerabilities, because most such problems were not widely understood at that time.
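A minimal sketch of the 32-bit assumption mentioned in Example 2 (hypothetical code, assuming the common ILP32 versus LP64/LLP64 data models): storing a pointer in an <code>int</code> works on typical 32-bit platforms but silently discards the upper bits when pointers are 64 bits wide:

<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int buffer[4] = {1, 2, 3, 4};
    // Legacy idiom: assumes an int can hold a pointer (true on ILP32, false on 64-bit systems).
    int addr = (int)(intptr_t)&buffer[0];
    int *p = (int *)(intptr_t)addr;   // May no longer point at buffer after the round trip.
    printf("original %p, recovered %p\n", (void *)&buffer[0], (void *)p);
    return 0;
}
</syntaxhighlight>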
Notable examples of the legacy problem:
* [[BIND|BIND 9]], presented by Paul Vixie and David Conrad as "BINDv9 is a [[Rewrite (programming)|complete rewrite]]", "Security was a key consideration in design",<ref>{{Cite web|url=http://impressive.net/archives/fogo/20001005080818.O15286@impressive.net|title=fogo archive: Paul Vixie and David Conrad on BINDv9 and Internet Security by Gerald Oskoboiny}}</ref> naming security, robustness, scalability and new protocols as key concerns for rewriting old legacy code.
* [[Microsoft Windows]] suffered from "the" [[Windows Metafile vulnerability]] and other exploits related to the WMF format. Microsoft Security Response Center describes the WMF features as ''"Around 1990, WMF support was added... This was a different time in the security landscape... were all completely trusted"'',<ref>{{Cite news|url=http://blogs.technet.com/msrc/archive/2006/01/13/417431.aspx|title=Looking at the WMF issue, how did it get there?|work=MSRC|access-date=2018-10-27|language=en-US|archive-url=https://web.archive.org/web/20060324152626/http://blogs.technet.com/msrc/archive/2006/01/13/417431.aspx|archive-date=2006-03-24|url-status=dead}}</ref> not being developed under the security initiatives at Microsoft.
* [[Oracle Corporation|Oracle]] is combating legacy problems, such as old source code written without addressing concerns of [[SQL injection]] and [[privilege escalation]], resulting in many security vulnerabilities which have taken time to fix.
=== Canonicalization ===
Malicious users are likely to invent new kinds of representations of incorrect data. For example, if a program attempts to reject accessing the file "/etc/passwd", a cracker might pass another variant of this file name, like "/etc/./passwd". [[Canonicalization]] libraries can be employed to avoid bugs due to non-canonical input.
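A minimal sketch (assuming a POSIX system and a hypothetical allowed directory <code>/srv/app/data/</code>): resolve a user-supplied path to its canonical form with <code>realpath</code> before comparing it against an access rule, so that variants such as <code>/etc/./passwd</code> or paths containing <code>..</code> cannot slip past the check:

<syntaxhighlight lang="c">
#include <limits.h>
#include <stdlib.h>
#include <string.h>

// Returns 1 if the path resolves inside the allowed directory, 0 otherwise.
int is_path_allowed(const char *user_path) {
    char resolved[PATH_MAX];
    if (user_path == NULL || realpath(user_path, resolved) == NULL) {
        return 0;  // Unresolvable (e.g., nonexistent) input is rejected.
    }
    const char *allowed = "/srv/app/data/";
    return strncmp(resolved, allowed, strlen(allowed)) == 0;  // Compare the canonical form.
}
</syntaxhighlight>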
=== Low tolerance against "potential" bugs ===
Assume that code constructs that appear to be problem-prone (similar to known vulnerabilities, etc.) are bugs and potential security flaws. The basic rule of thumb is: "I'm not aware of all types of [[security exploit]]s. I must protect against those I ''do'' know of and then I must be proactive!".
=== Other techniques ===
* One of the most common problems is unchecked use of constant-size structures and functions for dynamic-size data (the [[buffer overflow]] problem). This is especially common for [[string (computer programming)|string]] data in [[C (programming language)|C]]. C library functions like <tt>gets</tt> should never be used, since the maximum size of the input buffer is not passed as an argument. C library functions like <tt>scanf</tt> can be used safely, but require the programmer to take care with the selection of safe format strings, by sanitizing them before use (see the sketch below).
* Encrypt/authenticate all important data transmitted over networks. Do not attempt to implement your own encryption scheme, but use a proven one instead.
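A minimal sketch of the bounded-input idioms referred to above (assuming input arrives on <code>stdin</code>): <code>fgets</code> takes the destination size, unlike <code>gets</code>, and a <code>scanf</code> field width such as <code>%99s</code> prevents writing past the end of the buffer:

<syntaxhighlight lang="c">
#include <stdio.h>
#include <string.h>

int main(void) {
    char line[100];
    char word[100];

    if (fgets(line, sizeof line, stdin) != NULL) {
        line[strcspn(line, "\n")] = '\0';   // Strip the trailing newline, if present.
        printf("line: %s\n", line);
    }
    if (scanf("%99s", word) == 1) {         // Field width is one less than the buffer size.
        printf("word: %s\n", word);
    }
    return 0;
}
</syntaxhighlight>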
====The three rules of data security====
* All [[data]] is important until proven otherwise.
* All data is tainted until proven otherwise.
* All code is insecure until proven otherwise.
** You cannot prove the security of any code in [[userland (computing)|userland]]; or, as it is more commonly put, "never trust the client".
These three rules about data security describe how to handle any data, internally or externally sourced:
'''All data is important until proven otherwise''' - means that all data must be verified as garbage before being destroyed.
'''All data is tainted until proven otherwise''' - means that all data must be handled in a way that does not expose the rest of the runtime environment without verifying integrity.
'''All code is insecure until proven otherwise''' - while a slight misnomer, does a good job reminding us never to assume our code is secure, as bugs or [[undefined behavior]] may expose the project or system to attacks such as [[SQL injection]].
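A minimal sketch of treating externally sourced data as tainted (the function name and range are illustrative assumptions): the value is used only after a whitelist check proves it is a plain decimal number within range, rather than scanning it for known-bad characters:

<syntaxhighlight lang="c">
#include <ctype.h>
#include <stdlib.h>

// Parses a TCP port number from untrusted text. Returns 1 on success, 0 on rejection.
int parse_port(const char *input, int *port_out) {
    if (input == NULL || *input == '\0') {
        return 0;
    }
    for (const char *p = input; *p != '\0'; p++) {
        if (!isdigit((unsigned char)*p)) {
            return 0;              // Verify the data IS valid, not that it is invalid.
        }
    }
    long value = strtol(input, NULL, 10);
    if (value < 1 || value > 65535) {
        return 0;                  // Range-check before trusting the value.
    }
    *port_out = (int)value;
    return 1;
}
</syntaxhighlight>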
====More information====
* If data is to be checked for correctness, verify that it is correct, not that it is incorrect.
* [[Design by contract]]
* [[Assertion (computing)|Assertions]] (also called '''assertive programming''')
* Prefer [[Exception handling|exceptions]] to return codes
** Generally speaking, it is preferable{{According to whom}} to throw intelligible exceptions that enforce part of the [[API]] contract and guide client developers, rather than to return error codes that a caller might fail to check or misinterpret.
==See also==
* [[Computer security]]
== References ==
{{Reflist}}