Groupthink (talk | contribs) →Efficiency and Scalability: Scale and Scalability (sounds like a Jane Austen novel)
Scottcraig (talk | contribs)
Line 173:
::::Well, I guess I'm overruled. I was always taught that problems have complexity and algorithms have scalability. But you can interchange them if you want. And if you want to introduce a new definition for "efficiency", go ahead. [[User:Scottcraig|Scottcraig]] ([[User talk:Scottcraig|talk]]) 23:44, 14 January 2008 (UTC)
:::::Please correct me if I'm wrong, but I get the sense that you're approaching this from an informal IT angle rather than a formal CS angle. I also get the sense that you're confusing the notions of "scale" and "scalability". While it is true that an algorithm's run-time (or space) will "scale up" as n increases (except of course for an O(1) algorithm), that's ''not'' the same thing as [[scalability]]. You should also realize that using complexity notation to define efficiency is hardly a "new definition" – it's been around since the 1950s. [[User:Groupthink|Groupthink]] ([[User talk:Groupthink|talk]]) 00:06, 15 January 2008 (UTC)
::::::No, I am not trying to use IT or theoretical jargon. I am trying to express the ideas in the language that most people use, as that is what I see as the purpose of an encyclopedia. So I don't accept the narrow definition of scalability that applies only to computer servers.
::::::I think I see you agreeing that O(n) terminology is discussing how things "scale", although I wouldn't use quotation marks here. This is the normal use of the word. Rather, quotes are more suited to "scalability" in the IT world, as it is a more restrictive definition.
::::::The usual definition of efficiency in scientific terminology is a measure of how a process performs as a percentage of a theoretical maximum. For example, an electrical generator can have a theoretical efficiency of, say, 65%, and perhaps a realized efficiency of 45%.
::::::So by the very nature of the big O terminology, I would argue that complexity is a measure of how an algorithm's requirements grow as the input grows, i.e., how it scales. This is how someone new to the field would understand it. On the other hand, big O notation deliberately ignores constant multiplying factors, and those factors are what relate to efficiency. An algorithm can run 100 times slower than another, and hence have 1% of the efficiency, and still have the same complexity. If you have citations where the term "efficiency" is used, then I am sure they meant some looser sense of the term, but that does not mean it belongs in an encyclopedia. [[User:Scottcraig|Scottcraig]] ([[User talk:Scottcraig|talk]]) 00:29, 15 January 2008 (UTC)
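The constant-factor point above can be illustrated with a small sketch. The two routines below are hypothetical (they are not from the discussion): both perform work proportional to n and are therefore O(n), but one does 100 basic steps per element instead of one, making it 100 times slower – the distinction between complexity (growth rate) and efficiency (constant factor) that the comment draws.

```python
def steps_fast(n):
    """Process n items with one unit of work each: n steps total."""
    steps = 0
    for _ in range(n):
        steps += 1  # one basic operation per item
    return steps

def steps_slow(n):
    """Process n items with 100 units of work each: 100n steps total.

    Still O(n): big-O notation drops the constant factor of 100,
    even though this routine does 1% of the work per step usefully.
    """
    steps = 0
    for _ in range(n):
        for _ in range(100):
            steps += 1  # 100 basic operations per item
    return steps

# The ratio of step counts stays fixed at 100 as n grows, so both
# routines "scale" identically in the big-O sense despite the 100x
# difference in running cost.
```

For example, `steps_fast(10)` counts 10 steps while `steps_slow(10)` counts 1000, and that 100:1 ratio is unchanged at any input size.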