Flag (programming)

{{Merge|Flag word|date=December 2009}}
 
In [[computer programming]], a '''flag''' is one or more [[bit]]s used to store a [[binary numeral system|binary]] value or [[code]] with an assigned meaning, although the term is sometimes applied to other data types. Flags typically appear as members of a defined [[data structure]], such as a [[Row (database)|database record]], and the meaning of a flag's value is generally defined in relation to the data structure it is part of. In many cases, the binary value of a flag is understood to represent one of several possible states or statuses. In other cases, the binary values represent one or more attributes in a [[bit field]], often related to abilities or permissions, such as "can be written to" or "can be deleted", though many other meanings can be assigned to flag values. One common use of flags is to mark or designate data structures for future processing.
 
Within [[microprocessor]]s and other logic devices, flags are commonly used to control or indicate the intermediate or final state or outcome of an operation. Microprocessors typically have, for example, a [[status register]] composed of such flags, which indicate post-operation conditions such as an [[arithmetic overflow]]. The flags can then be used in subsequent operations, such as processing conditional [[jump instruction]]s. For example, a ''je'' (Jump if Equal) instruction in [[X86 assembly language#Programming flow|x86 assembly language]] results in a jump if the Z (zero) flag was set by a previous operation.